Journal articles on the topic 'Variational Graph Auto-Encoder (VGAE)'




Consult the top 47 journal articles for your research on the topic 'Variational Graph Auto-Encoder (VGAE).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Hui, Binyuan, Pengfei Zhu, and Qinghua Hu. "Collaborative Graph Convolutional Networks: Unsupervised Learning Meets Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4215–22. http://dx.doi.org/10.1609/aaai.v34i04.5843.

Abstract:
Graph convolutional networks (GCN) have achieved promising performance in attributed graph clustering and semi-supervised node classification because they are capable of modeling complex graph structure and jointly learning both features and relations of nodes. Inspired by the success of unsupervised learning in the training of deep models, we wonder whether graph-based unsupervised learning can collaboratively boost the performance of semi-supervised learning. In this paper, we propose a multi-task graph learning model, called collaborative graph convolutional networks (CGCN). CGCN is composed of an attributed graph clustering network and a semi-supervised node classification network. As Gaussian mixture models can effectively discover the inherent complex data distributions, a new end-to-end attributed graph clustering network is designed by combining the variational graph auto-encoder with Gaussian mixture models (GMM-VGAE) rather than the classic k-means. If the pseudo-label of an unlabeled sample assigned by GMM-VGAE is consistent with the prediction of the semi-supervised GCN, it is selected to further boost the performance of semi-supervised learning with the help of the pseudo-labels. Extensive experiments on benchmark graph datasets validate the superiority of our proposed GMM-VGAE compared with the state-of-the-art attributed graph clustering networks. The performance of node classification is greatly improved by our proposed CGCN, which verifies that graph-based unsupervised learning can be well exploited to enhance the performance of semi-supervised learning.
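The agreement rule described in this abstract — trust an unlabeled node's pseudo-label only when the GMM-VGAE assignment matches the semi-supervised GCN's prediction — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code; it assumes cluster IDs have already been aligned to class labels (e.g., via Hungarian matching), and the names `gmm_vgae_labels`, `gcn_logits`, and `unlabeled_idx` are hypothetical.

```python
import numpy as np

def select_pseudo_labels(gmm_vgae_labels, gcn_logits, unlabeled_idx):
    """Keep only the unlabeled nodes whose (class-aligned) GMM-VGAE cluster
    assignment agrees with the semi-supervised GCN's current prediction."""
    gcn_pred = gcn_logits.argmax(axis=1)                  # GCN class prediction per node
    agree = gmm_vgae_labels[unlabeled_idx] == gcn_pred[unlabeled_idx]
    selected = unlabeled_idx[agree]                       # nodes whose pseudo-labels are trusted
    return selected, gmm_vgae_labels[selected]            # indices and their pseudo-labels
```

The selected nodes and pseudo-labels would then simply be added to the labeled set for the next round of semi-supervised training.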
2

Duan, Yuning, Jingdong Jia, Yuhui Jin, Haitian Zhang, and Jian Huang. "Expressway Vehicle Trajectory Prediction Based on Fusion Data of Trajectories and Maps from Vehicle Perspective." Applied Sciences 14, no. 10 (May 15, 2024): 4181. http://dx.doi.org/10.3390/app14104181.

Abstract:
Research on vehicle trajectory prediction based on road monitoring video data often utilizes a global map as an input, disregarding the fact that drivers rely on the road structures observable from their own positions for path planning. This oversight reduces the accuracy of prediction. To address this, we propose the CVAE-VGAE model, a novel trajectory prediction approach. Initially, our method transforms global perspective map data into vehicle-centric map data, representing it through a graph structure. Subsequently, Variational Graph Auto-Encoders (VGAEs), an unsupervised learning framework tailored for graph-structured data, are employed to extract road environment features specific to each vehicle’s location from the map data. Finally, a prediction network based on the Conditional Variational Autoencoder (CVAE) structure is designed, which first predicts the driving endpoint and then fits the complete future trajectory. The proposed CVAE-VGAE model integrates a self-attention mechanism into its encoding and decoding modules to infer endpoint intent and incorporate road environment features for precise trajectory prediction. Through a series of ablation experiments, we demonstrate the efficacy of our method in enhancing vehicle trajectory prediction metrics. Furthermore, we compare our model with traditional and frontier approaches, highlighting significant improvements in prediction accuracy.
3

Choong, Jun Jin, Xin Liu, and Tsuyoshi Murata. "Optimizing Variational Graph Autoencoder for Community Detection with Dual Optimization." Entropy 22, no. 2 (February 7, 2020): 197. http://dx.doi.org/10.3390/e22020197.

Abstract:
Variational Graph Autoencoder (VGAE) has recently gained traction for learning representations on graphs. Its inception has allowed models to achieve state-of-the-art performance for challenging tasks such as link prediction, rating prediction, and node clustering. However, a fundamental flaw exists in Variational Autoencoder (VAE)-based approaches. Specifically, merely minimizing the loss of the VAE increases the deviation from its primary objective. Focusing on the Variational Graph Autoencoder for Community Detection (VGAECD), we found that optimizing the loss using stochastic gradient descent often leads to sub-optimal community structure, especially when initialized poorly. We address this shortcoming by introducing a dual optimization procedure. This procedure aims to guide the optimization process and encourage learning of the primary objective. Additionally, we linearize the encoder to reduce the number of learning parameters. The outcome is a robust algorithm that outperforms its predecessor.
4

Ma, Weigang, Jing Wang, Chaohui Zhang, Qiao Jia, Lei Zhu, Wenjiang Ji, and Zhoukai Wang. "Application of Variational Graph Autoencoder in Traction Control of Energy-Saving Driving for High-Speed Train." Applied Sciences 14, no. 5 (February 29, 2024): 2037. http://dx.doi.org/10.3390/app14052037.

Abstract:
In a high-speed rail system, the driver repeatedly adjusts the train’s speed and traction while driving, causing a high level of energy consumption. This also leads to instability in the train’s operation, affecting passengers’ experience and the operational efficiency of the system. To solve this problem, we propose a variational graph auto-encoder (VGAE) model using a neural network to learn the posterior distribution. This model can effectively capture the correlation between the components of a high-speed rail system and accurately simulate drivers’ operating state. The specific traction control is divided into two parts. The first part employs a method based on the K-Nearest Neighbors (KNN) algorithm and undersampling to address the negative impact of imbalanced quantities in the training dataset. The second part utilizes a variational graph autoencoder to derive the initial traction control of drivers, thereby predicting the energy performance of the drivers’ operation. An 83,786 m long high-speed train driving section is used as an example for verification. Using a confusion matrix for our comparative analysis, we conclude that the energy consumption is approximately 18.78% less than that of manual traction control. This shows the potential and effect of the variational graph autoencoder model for optimizing energy consumption in high-speed rail systems.
5

Zhang, Jing, Guangli Wu, and Shanshan Song. "Video Summarization Generation Based on Graph Structure Reconstruction." Electronics 12, no. 23 (November 23, 2023): 4757. http://dx.doi.org/10.3390/electronics12234757.

Abstract:
Video summarization aims to identify important segments in a video and merge them into a concise representation, enabling users to comprehend the essential information without watching the entire video. Graph structure-based video summarization approaches ignore the issue of redundant adjacency matrices. To address this issue, this paper proposes a video summary generation model based on graph structure reconstruction (VOGNet), in which the model first adopts a variational graph auto-encoder (VGAE) to reconstruct the graph structure and remove its redundant information; it then uses the reconstructed graph structure in a graph attention network (GAT), allocating different weights to different shot features in the neighborhood; and lastly, to avoid the loss of information during training, a feature fusion approach is proposed to combine the shot features obtained during training with the original shot features as the shot features for generating the summary. We perform extensive experiments on two standard datasets, SumMe and TVSum, and the experimental results demonstrate the effectiveness and robustness of the proposed model.
6

Zhang, Ying, Qi Zhang, Yu Zhang, and Zhiyuan Zhu. "VGAE-AMF: A Novel Topology Reconstruction Algorithm for Invulnerability of Ocean Wireless Sensor Networks Based on Graph Neural Network." Journal of Marine Science and Engineering 11, no. 4 (April 16, 2023): 843. http://dx.doi.org/10.3390/jmse11040843.

Abstract:
Ocean wireless sensor networks (OWSNs) play an important role in marine environment monitoring, underwater target tracking, and marine defense. OWSNs not only monitor surface information in real time but also act as an important relay layer for underwater sensor networks, establishing data communication between underwater sensors and ship-based base stations, land-based base stations, and satellites. The destructive resistance of OWSNs is closely related to the marine environment in which they are located. Affected by the dynamics of seawater, node locations can easily shift, resulting in deteriorating connectivity and an unstable network topology. In this paper, a novel topology optimization model for OWSNs based on the idea of link prediction, cascading variational graph auto-encoders and an adaptive multilayer filter (VGAE-AMF), is proposed. The model attenuates the extent of damage after the network is attacked, extracts the global features of the OWSN with a graph convolutional network (GCN) to obtain the graph embedding vector of the network, decodes it to generate a new topology, and finally applies an adaptive multilayer filter (AMF) to achieve topology control at the node level. Simulation experiment results show that the robustness index of the optimized network is improved by 39.65% and that it has good invulnerability to both random and deliberate attacks.
7

Patel, Neel, Nhat Le, Tan Nguyen, Fedaa Najdawi, Sandhya Srinivasan, Adam Stanford-Moore, Deeksha Kartik, et al. "Abstract 4912: Unsupervised detection of stromal phenotypes with distinct fibrogenic and inflamed properties in NSCLC." Cancer Research 84, no. 6_Supplement (March 22, 2024): 4912. http://dx.doi.org/10.1158/1538-7445.am2024-4912.

Abstract:
Abstract Background: Understanding the composition of cancer-associated stroma (CAS) is vital, as the number and location of immune cells and fibroblasts, as well as the degree of extracellular matrix deposition, have implications for cancer progression and response to treatment, including in non-small cell lung cancer (NSCLC). Manual analysis of CAS does not fully describe the stromal milieu, especially from a spatial perspective, and is highly subjective. To this end, we have developed an unsupervised machine learning (ML) model to characterize the CAS in NSCLC from hematoxylin and eosin (H&E) stained whole slide images (WSI) at scale. Methods: PathExploreTM models were deployed to predict stromal tissue and cell types, while another ML model was used to detect collagen fibers from H&E stained WSIs from the TCGA LUAD (N=536) and LUSC (N=464) datasets. Stroma was divided into small regions (median = 0.02 mm2), and 88 features characterizing cell distribution, tissue composition and fiber density were extracted from each region. Graphs were generated connecting neighboring regions (nodes), and an unsupervised variational graph auto-encoder (VGAE) model was trained to learn 8 latent features through dimensionality reduction. Stromal phenotypes were then derived from the latent features using k-means clustering. The fraction of each phenotype in the stroma was correlated against immune- and stroma-related gene expression signatures (GES) and overall survival (OS). Results: Deployment of VGAE on LUAD and LUSC WSIs revealed three distinct stromal phenotypes - P0, P1 and P2. Fibroblast density was elevated in P0 and P1 regions (p<0.001), immune cell density was elevated in P2 regions (p<0.001), and collagen fiber intensity was highest in P1 regions (p<0.001). P2 enrichment was correlated with elevated expression of the T cell-inflamed gene expression profile (TGEP; Spearman ρ = 0.43 in LUAD; ρ = 0.27 in LUSC) and with improved OS (HR = 0.696; 95% CIs: 0.571-0.847 in LUSC). Conversely, P1 enrichment was positively associated with a transforming growth factor-β-induced cancer associated fibroblast GES (TGFβ-CAF: ρ = 0.19 in LUAD and ρ = 0.12 in LUSC) and poor OS (HR = 1.358; 95% CIs: 1.149-1.603 in LUSC). These phenotypes are consistent with fibroblast-enriched, collagen-depleted stroma (P0), collagen-rich, fibroblast-enriched tumor-promoting stroma (P1), and immune cell-enriched, tumor-suppressive stroma (P2). Conclusions: We describe an unsupervised, data-driven method of predicting stromal regions with discrete patterns of cell composition and collagen deposition in NSCLC. This approach identified three phenotypes of NSCLC stroma. These results highlight the ability of ML models to characterize and find meaningful patterns within the cell, tissue, and matrix components of a tumor. This work provides further evidence of the potential of ML to discover novel precision medicine biomarkers in NSCLC. Citation Format: Neel Patel, Nhat Le, Tan Nguyen, Fedaa Najdawi, Sandhya Srinivasan, Adam Stanford-Moore, Deeksha Kartik, Jun Zhang, Jacqueline Brosnan-Cashman, Robert Egger, Justin Lee, Matthew Bronnimann. Unsupervised detection of stromal phenotypes with distinct fibrogenic and inflamed properties in NSCLC [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 4912.
8

Shi, Han, Haozheng Fan, and James T. Kwok. "Effective Decoding in Graph Auto-Encoder Using Triadic Closure." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 906–13. http://dx.doi.org/10.1609/aaai.v34i01.5437.

Abstract:
The (variational) graph auto-encoder and its variants have been popularly used for representation learning on graph-structured data. While the encoder is often a powerful graph convolutional network, the decoder reconstructs the graph structure by only considering two nodes at a time, thus ignoring possible interactions among edges. On the other hand, structured prediction, which considers the whole graph simultaneously, is computationally expensive. In this paper, we utilize the well-known triadic closure property which is exhibited in many real-world networks. We propose the triad decoder, which considers and predicts the three edges involved in a local triad together. The triad decoder can be readily used in any graph-based auto-encoder. In particular, we incorporate it into the (variational) graph auto-encoder. Experiments on link prediction, node clustering and graph generation show that the use of triads leads to more accurate prediction, clustering and better preservation of the graph characteristics.
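To make the contrast concrete, the sketch below shows the standard pairwise (inner-product) decoder used by the (variational) graph auto-encoder next to a triad-style decoder that scores the three edges of a local triad jointly. This is a minimal illustration under stated assumptions, not the paper's implementation: `mlp` stands for any assumed torch module mapping the concatenated embeddings of three nodes to three edge logits.

```python
import torch

def pairwise_decoder(z):
    """Standard (V)GAE decoder: each edge probability depends only on its
    two endpoint embeddings, sigma(z_i . z_j)."""
    return torch.sigmoid(z @ z.t())

def triad_decoder(z, i, j, k, mlp):
    """Triad-style decoder sketch: jointly score the three edges (i,j), (j,k),
    (i,k) of a local triad from the concatenated node embeddings.
    `mlp` is an assumed torch.nn.Module mapping 3*d -> 3 logits; the paper's
    exact architecture may differ."""
    h = torch.cat([z[i], z[j], z[k]], dim=-1)
    return torch.sigmoid(mlp(h))   # one probability per edge in the triad
```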
9

Behrouzi, Tina, and Dimitrios Hatzinakos. "Graph variational auto-encoder for deriving EEG-based graph embedding." Pattern Recognition 121 (January 2022): 108202. http://dx.doi.org/10.1016/j.patcog.2021.108202.

10

Zhan, Junjian, Feng Li, Yang Wang, Daoyu Lin, and Guangluan Xu. "Structural Adversarial Variational Auto-Encoder for Attributed Network Embedding." Applied Sciences 11, no. 5 (March 7, 2021): 2371. http://dx.doi.org/10.3390/app11052371.

Abstract:
As most networks come with some content in each node, attributed network embedding has aroused much research interest. Most existing attributed network embedding methods aim at learning a fixed representation for each node encoding its local proximity. However, those methods usually neglect the global information between nodes distant from each other and the distribution of the latent codes. We propose the Structural Adversarial Variational Graph Auto-Encoder (SAVGAE), a novel framework which encodes the network structure and node content into low-dimensional embeddings. On one hand, our model captures the local proximity and proximities at any distance of a network by exploiting a high-order proximity indicator named Rooted PageRank. On the other hand, our method learns the data distribution of each node representation while circumventing, through adversarial training, the side effect its sampling process causes on learning a robust embedding. On benchmark datasets, we demonstrate that our method performs competitively compared with state-of-the-art models.
11

Xie, Luodi, Huimin Huang, and Qing Du. "A Co-Embedding Model with Variational Auto-Encoder for Knowledge Graphs." Applied Sciences 12, no. 2 (January 12, 2022): 715. http://dx.doi.org/10.3390/app12020715.

Abstract:
Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations for entities and relations. It serves as the basis for downstream tasks, such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities/relations as vectors or tensors, mapping them in different semantic spaces and ignoring the uncertainties. The affinities between entities and relations are ambiguous when they are not embedded in the same latent spaces. In this paper, we incorporate a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the issue of neglecting uncertainty for KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method has the advantages of high quality and interpretability. Our experimental results on several benchmark datasets demonstrate our model’s superiority over the state-of-the-art baselines.
12

Fathy, Asmaa Mohamed. "Deep Embedding Data Fusion Scheme Using Variational Graph Auto-Encoder in IoT Environments." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 4 (August 25, 2020): 4363–72. http://dx.doi.org/10.30534/ijatcse/2020/28942020.

13

Zhao, Yuexuan, and Jing Huang. "Dirichlet Process Prior for Student’s t Graph Variational Autoencoders." Future Internet 13, no. 3 (March 16, 2021): 75. http://dx.doi.org/10.3390/fi13030075.

Abstract:
Graph variational auto-encoder (GVAE) is a model that combines neural networks and Bayesian methods, capable of more deeply exploring the influential latent features of graph reconstruction. However, several pieces of research based on GVAE employ a plain prior distribution for latent variables, for instance, the standard normal distribution (N(0,1)). Although this kind of simple distribution has the advantage of convenient calculation, it also makes the latent variables contain relatively little helpful information. The lack of adequate expression of nodes will inevitably affect the process of generating graphs, which will eventually lead to the discovery of only external relations and the neglect of some complex internal correlations. In this paper, we present a novel prior distribution for GVAE, called the Dirichlet process (DP) construction for Student’s t (St) distribution. The DP allows the latent variables to adapt their complexity during learning and then cooperates with the heavy-tailed St distribution to approach sufficient node representation. Experimental results show that this method can achieve relatively better performance against the baselines.
14

Yao, Heng, Jihong Guan, and Tianying Liu. "Denoising Protein–Protein interaction network via variational graph auto-encoder for protein complex detection." Journal of Bioinformatics and Computational Biology 18, no. 03 (June 2020): 2040010. http://dx.doi.org/10.1142/s0219720020400107.

Abstract:
Identifying protein complexes is an important issue in computational biology, as it benefits the understanding of cellular functions and the design of drugs. In the past decades, many computational methods have been proposed that mine dense subgraphs in Protein–Protein Interaction Networks (PINs). However, the high rate of false positive/negative interactions in PINs prevents accurately detecting complexes directly from the raw PINs. In this paper, we propose a denoising approach for protein complex detection using a variational graph auto-encoder. First, we embed a PIN into vector space by a stacked graph convolutional network (GCN), then decide which interactions in the PIN are credible. If the probability of an interaction being credible is less than a threshold, we delete the interaction. In this way, we reconstruct a reliable PIN. Following that, we detect protein complexes in the reconstructed PIN by using several typical detection methods, including CPM, Coach, DPClus, GraphEntropy, IPCA and MCODE, and compare the results with those obtained directly from the original PIN. We conduct an empirical evaluation on four yeast PPI datasets (Gavin, Krogan, DIP and Wiphi) and two human PPI datasets (Reactome and Reactomekb), against two yeast complex benchmarks (CYC2008 and MIPS) and three human complex benchmarks (REACT, REACT_uniprotkb and CORE_COMPLEX_human), respectively. Experimental results show that with the reconstructed PINs obtained by our denoising approach, complex detection performance is clearly boosted, in most cases by over 5%, sometimes even by 200%. Furthermore, we compare our approach with two existing denoising methods (RWS and RedNemo) while varying the matching rates on separate complex distributions. Our results show that in most cases (over 2/3), the proposed approach outperforms the existing methods.
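The thresholding step described here — keep an interaction only if the auto-encoder judges it sufficiently credible — reduces to filtering the edge list by the reconstructed edge probabilities. The snippet below is a hedged sketch of that step only, with illustrative names (`adj_logits`, `edges`); it is not the authors' code.

```python
import torch

def denoise_pin(adj_logits, edges, threshold=0.5):
    """Keep an interaction only if its VGAE-reconstructed probability is at
    least `threshold`. `adj_logits` is the decoder's dense score matrix and
    `edges` is an (E, 2) integer tensor of protein index pairs."""
    probs = torch.sigmoid(adj_logits)                      # reconstructed edge probabilities
    keep = probs[edges[:, 0], edges[:, 1]] >= threshold    # credibility test per edge
    return edges[keep]                                     # edge list of the reliable PIN
```

Any of the downstream complex-detection methods listed in the abstract would then run on the filtered edge list instead of the raw PIN.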
15

Zhou, Qiang, Xinjiang Lu, Jingjing Gu, Zhe Zheng, Bo Jin, and Jingbo Zhou. "Explainable Origin-Destination Crowd Flow Interpolation via Variational Multi-Modal Recurrent Graph Auto-Encoder." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9422–30. http://dx.doi.org/10.1609/aaai.v38i8.28796.

Abstract:
Origin-destination (OD) crowd flow, if inferred more accurately at a fine-grained level, has the potential to enhance the efficacy of various urban applications. In practice, however, mining OD crowd flow effectively requires spatially interpolating it because of inevitable missing values. This problem is further complicated by the inherently scarce and noisy nature of OD crowd flow data. In this paper, we propose an uncertainty-aware interpolative and explainable framework, namely UApex, for realizing reliable and trustworthy OD crowd flow interpolation. Specifically, we first design a Variational Multi-modal Recurrent Graph Auto-Encoder (VMR-GAE) for uncertainty-aware OD crowd flow interpolation. A key idea here is to formulate the problem as semi-supervised learning on directed graphs. Next, to mitigate the data scarcity, we incorporate a distribution alignment mechanism that can introduce supplementary modalities into variational inference. Then, a dedicated decoder with a Poisson prior is proposed for OD crowd flow interpolation. Moreover, to make VMR-GAE more trustworthy, we develop an efficient and uncertainty-aware explainer that can provide explanations from the spatiotemporal topology perspective via the Shapley value. Extensive experiments on two real-world datasets validate that VMR-GAE outperforms the state-of-the-art baselines. Also, an exploratory empirical study shows that the proposed explainer can generate meaningful spatiotemporal explanations.
16

Karimi, Mostafa, Arman Hasanzadeh, and Yang Shen. "Network-principled deep generative models for designing drug combinations as graph sets." Bioinformatics 36, Supplement_1 (July 1, 2020): i445–i454. http://dx.doi.org/10.1093/bioinformatics/btaa317.

Abstract:
Abstract Motivation Combination therapy has been shown to improve therapeutic efficacy while reducing side effects. Importantly, it has become an indispensable strategy to overcome resistance in antibiotics, antimicrobials and anticancer drugs. Facing an enormous chemical space and unclear design principles for small-molecule combinations, computational drug-combination design has not seen generative models meet its potential to accelerate resistance-overcoming drug combination discovery. Results We have developed the first deep generative model for drug combination design, by jointly embedding graph-structured domain knowledge and iteratively training a reinforcement learning-based chemical graph-set designer. First, we have developed hierarchical variational graph auto-encoders trained end-to-end to jointly embed gene–gene, gene–disease and disease–disease networks. Novel attentional pooling is introduced here for learning disease representations from associated genes’ representations. Second, targeting diseases in learned representations, we have recast the drug-combination design problem as graph-set generation and developed a deep learning-based model with novel rewards. Specifically, besides chemical validity rewards, we have introduced a novel generative adversarial reward (generalized sliced Wasserstein) for chemically diverse molecules with distributions similar to known drugs. We have also designed a network principle-based reward for disease-specific drug combinations. Numerical results indicate that, compared to state-of-the-art graph embedding methods, the hierarchical variational graph auto-encoder learns more informative and generalizable disease representations. Results also show that the deep generative models generate drug combinations following the principle across diseases. Case studies on four diseases show that network-principled drug combinations tend to have low toxicity. The generated drug combinations collectively cover the disease module similar to FDA-approved drug combinations and could potentially suggest novel systems pharmacology strategies. Our method allows for examining and following a network-based principle or hypothesis to efficiently generate disease-specific drug combinations in a vast chemical combinatorial space. Availability and implementation https://github.com/Shen-Lab/Drug-Combo-Generator. Supplementary information Supplementary data are available at Bioinformatics online.
17

Su, Hang, Xinzheng Zhang, Yuqing Luo, Ce Zhang, Xichuan Zhou, and Peter M. Atkinson. "Nonlocal feature learning based on a variational graph auto-encoder network for small area change detection using SAR imagery." ISPRS Journal of Photogrammetry and Remote Sensing 193 (November 2022): 137–49. http://dx.doi.org/10.1016/j.isprsjprs.2022.09.006.

18

Xu, Lei, Leiming Xia, Shourun Pan, and Zhen Li. "Triple Generative Self-Supervised Learning Method for Molecular Property Prediction." International Journal of Molecular Sciences 25, no. 7 (March 28, 2024): 3794. http://dx.doi.org/10.3390/ijms25073794.

Abstract:
Molecular property prediction is an important task in drug discovery, and with the help of self-supervised learning methods, the performance of molecular property prediction can be improved by utilizing large-scale unlabeled datasets. In this paper, we propose a triple generative self-supervised learning method for molecular property prediction, called TGSS. Three encoders, including a bi-directional long short-term memory recurrent neural network (BiLSTM), a Transformer, and a graph attention network (GAT), are used in pre-training the model on molecular sequence and graph structure data to extract molecular features. A variational auto-encoder (VAE) is used for reconstructing features from the three models. In the downstream task, in order to balance the information between different molecular features, a feature fusion module is added to assign different weights to each feature. In addition, to improve the interpretability of the model, atomic similarity heat maps are introduced to demonstrate the effectiveness and rationality of molecular feature extraction. We demonstrate the accuracy of the proposed method on chemical and biological benchmark datasets through comparative experiments.
19

Du, Bing, Xiaomu Cheng, Yiping Duan, and Huansheng Ning. "fMRI Brain Decoding and Its Applications in Brain–Computer Interface: A Survey." Brain Sciences 12, no. 2 (February 7, 2022): 228. http://dx.doi.org/10.3390/brainsci12020228.

Abstract:
Brain neural activity decoding is an important branch of neuroscience research and a key technology for the brain–computer interface (BCI). Researchers initially developed simple linear models and machine learning algorithms to classify and recognize brain activities. With the great success of deep learning in image recognition and generation, deep neural networks (DNN) have been engaged in reconstructing visual stimuli from human brain activity via functional magnetic resonance imaging (fMRI). In this paper, we review brain activity decoding models based on machine learning and deep learning algorithms. Specifically, we focus on current brain activity decoding models receiving high attention: the variational auto-encoder (VAE), the generative adversarial network (GAN), and the graph convolutional network (GCN). Furthermore, brain neural-activity-decoding-enabled fMRI-based BCI applications in mental and psychological disease treatment are presented to illustrate the positive correlation between brain decoding and BCI. Finally, existing challenges and future research directions are addressed.
20

Wang, Lei, Zejian Yuan, and Badong Chen. "Learning to Generate an Unbiased Scene Graph by Using Attribute-Guided Predicate Features." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 2581–89. http://dx.doi.org/10.1609/aaai.v37i2.25356.

Abstract:
Scene Graph Generation (SGG) aims to capture the semantic information in an image and build a structured representation, which facilitates downstream tasks. The current challenge in SGG is to tackle the biased predictions caused by the long-tailed distribution of predicates. Since multiple predicates in SGG are coupled in an image, existing data re-balancing methods cannot completely balance the head and tail predicates. In this work, a decoupled learning framework is proposed for unbiased scene graph generation by using attribute-guided predicate features to construct a balanced training set. Specifically, the predicate recognition is decoupled into Predicate Feature Representation Learning (PFRL) and predicate classifier training with a class-balanced predicate feature set, which is constructed by our proposed Attribute-guided Predicate Feature Generation (A-PFG) model. In the A-PFG model, we first define the class labels and the corresponding visual features as attributes to describe a predicate. Then the predicate feature and the attribute embedding are mapped into a shared hidden space by a dual Variational Auto-encoder (VAE), and finally the synthetic predicate features are forced to learn the contextual information in the attributes via cross reconstruction and distribution alignment. To demonstrate the effectiveness of our proposed method, our decoupled learning framework and A-PFG model are applied to various SGG models. The empirical results show that our method is substantially improved on all benchmarks and achieves new state-of-the-art performance for unbiased scene graph generation. Our code is available at https://github.com/wanglei0618/A-PFG.
21

Mao, Cunli, Haoyuan Liang, Zhengtao Yu, Yuxin Huang, and Junjun Guo. "A Clustering Method of Case-Involved News by Combining Topic Network and Multi-Head Attention Mechanism." Sensors 21, no. 22 (November 11, 2021): 7501. http://dx.doi.org/10.3390/s21227501.

Abstract:
Finding news about the same case among large numbers of case-involved news reports is an important basis for public opinion analysis. Existing text clustering methods are usually based on topic models, which only use topic and case information as the global features of documents, so distinguishing between different cases of similar types remains a challenge. The contents of documents contain rich local features. Taking into account the internal features of news, the information of cases and the contributions provided by different topics, we propose a clustering method for case-involved news which combines a topic network and a multi-head attention mechanism. We use case information and topic information to construct a topic network, and then extract global features with a graph convolutional network, thus realizing the combination of case information and topic information. At the same time, local features are extracted by the multi-head attention mechanism. Finally, the fusion of global features and local features is realized by a variational auto-encoder, and the learned latent representations are used for clustering. The experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised clustering methods.
22

Zhao, Mingle, Dingfu Zhou, Xibin Song, Xiuwan Chen, and Liangjun Zhang. "DiT-SLAM: Real-Time Dense Visual-Inertial SLAM with Implicit Depth Representation and Tightly-Coupled Graph Optimization." Sensors 22, no. 9 (April 28, 2022): 3389. http://dx.doi.org/10.3390/s22093389.

Abstract:
Recently, generating dense maps in real-time has become a hot research topic in the mobile robotics community, since dense maps can provide more informative and continuous features compared with sparse maps. Implicit depth representation (e.g., the depth code) derived from deep neural networks has been employed in the visual-only or visual-inertial simultaneous localization and mapping (SLAM) systems, which achieve promising performances on both camera motion and local dense geometry estimations from monocular images. However, the existing visual-inertial SLAM systems combined with depth codes are either built on a filter-based SLAM framework, which can only update poses and maps in a relatively small local time window, or based on a loosely-coupled framework, while the prior geometric constraints from the depth estimation network have not been employed for boosting the state estimation. To well address these drawbacks, we propose DiT-SLAM, a novel real-time Dense visual-inertial SLAM with implicit depth representation and Tightly-coupled graph optimization. Most importantly, the poses, sparse maps, and low-dimensional depth codes are optimized with the tightly-coupled graph by considering the visual, inertial, and depth residuals simultaneously. Meanwhile, we propose a light-weight monocular depth estimation and completion network, which is combined with attention mechanisms and the conditional variational auto-encoder (CVAE) to predict the uncertainty-aware dense depth maps from more low-dimensional codes. Furthermore, a robust point sampling strategy introducing the spatial distribution of 2D feature points is also proposed to provide geometric constraints in the tightly-coupled optimization, especially for textureless or featureless cases in indoor environments. We evaluate our system on open benchmarks. The proposed methods achieve better performances on both the dense depth estimation and the trajectory estimation compared to the baseline and other systems.
23

Li, Peng, Shufang Guo, Chenghao Zhang, Mosharaf Md Parvej, and Jing Zhang. "A Construction Method for a Dynamic Weighted Protein Network Using Multi-Level Embedding." Applied Sciences 14, no. 10 (May 11, 2024): 4090. http://dx.doi.org/10.3390/app14104090.

Abstract:
The rapid development of high-throughput technology has generated a large amount of protein–protein interaction (PPI) data, which provide a large amount of data support for constructing dynamic protein–protein interaction networks (PPINs). Constructing dynamic PPINs and applying them to recognize protein complexes has become a hot research topic. Most existing methods for complex recognition cannot fully mine the information of PPINs. To address this problem, we propose a construction method of dynamic weighted protein network by multi-level embedding (DWPNMLE). It can reflect the protein network’s dynamics and the protein network’s higher-order proximity. Firstly, the protein active period is calculated to divide the protein subnetworks at different time points. Then, the connection probability is used for the proteins possessing the same time points to judge whether there is an interaction relationship between them. Then, the corresponding protein subnetworks (multiple adjacency matrices) are constructed. Secondly, the multiple feature matrices are constructed using one-hot coding with the gene ontology (GO) information. Next, the first embedding is performed using variational graph auto-encoders (VGAEs) to aggregate features efficiently, followed by the second embedding using deep attributed network embedding (DANE) to strengthen the node representations learned in the first embedding and to maintain the first-order and higher-order proximity of the original network; finally, we compute the cosine similarity to obtain the final dynamic weighted PPIN. To evaluate the effectiveness of DWPNMLE, we apply four classical protein-complex-recognition algorithms on the DWPNMLE and compare them with two other dynamic protein network construction methods. The experimental results demonstrate that DWPNMLE significantly enhances the accuracy of complex recognition with high robustness, and the algorithms’ efficiency is also within a reasonable range.
24

Zhu, Guixiang, Jie Cao, Lei Chen, Youquan Wang, Zhan Bu, Shuxin Yang, Jianqing Wu, and Zhiping Wang. "A Multi-task Graph Neural Network with Variational Graph Auto-Encoders for Session-based Travel Packages Recommendation." ACM Transactions on the Web, February 2023. http://dx.doi.org/10.1145/3577032.

Abstract:
Session-based travel packages recommendation aims to predict users’ next click based on their current and historical sessions recorded by Online Travel Agencies (OTA). Recently, an increasing number of studies attempted to apply Graph Neural Networks (GNN) to session-based recommendation and obtained promising results. However, most of them do not take full advantage of the explicit latent structure from attributes of items, making the learned representations of items less effective and difficult to interpret. Moreover, they only combine historical sessions (long-term preferences) with a current session (short-term preference) to learn a unified representation of users, ignoring the effects of historical sessions on the current session. To this end, this paper proposes a novel session-based model named STR-VGAE, which fulfills the subtasks of travel packages recommendation and variational graph auto-encoders simultaneously. STR-VGAE mainly consists of three components: a travel packages encoder, a users behaviors encoder, and interaction modeling. Specifically, the travel packages encoder module is used to learn a unified travel package representation from co-occurrence attribute graphs by using multi-view variational graph auto-encoders and a multi-view attention network. The users behaviors encoder module is used to encode users’ historical and current sessions with a personalized GNN, which considers the effects of historical sessions on the current session, and coalesces these two kinds of session representations to learn high-quality users’ representations by exploiting a gated fusion approach. The interaction modeling module is used to calculate recommendation scores over all candidate travel packages. Extensive experiments on a real-life tourism e-commerce dataset from China show that STR-VGAE yields significant performance advantages over several competitive methods, and meanwhile provides an interpretation for the generated recommendation list.
25

Li, Dongjie, Dong Li, and Guang Lian. "Variational Graph Autoencoder with Adversarial Mutual Information Learning for Network Representation Learning." ACM Transactions on Knowledge Discovery from Data, August 22, 2022. http://dx.doi.org/10.1145/3555809.

Abstract:
With the success of Graph Neural Networks (GNN) on network data, some GNN-based representation learning methods for networks have emerged recently. The Variational Graph Autoencoder (VGAE) is a basic GNN framework for network representation. Its purpose is to preserve the topology and node attribute information of the network well in order to learn node representations, but it only reconstructs the network topology and does not consider the reconstruction of node features. This strategy means node representations cannot preserve node feature information well, impairing the ability of the VGAE method to learn higher-quality representations. To solve this problem, we propose a new network representation method that improves the VGAE method to retain both node features and network structure information well. The method utilizes adversarial mutual information learning to maximize the mutual information (MI) between node features and node representations during the encoding process of the variational autoencoder, which forces the variational encoder to obtain representations containing the most informative node features. The method consists of three parts: a variational graph autoencoder, which includes a variational encoder (MI generator, G) and a decoder; a positive MI sample module (the MI maximization module); and an MI discriminator (D). Furthermore, we explain why maximizing MI between node features and node representations can reconstruct node attributes. Finally, we conduct experiments on seven public representative datasets for node classification, node clustering, and graph visualization tasks. Experimental results demonstrate that the proposed algorithm significantly outperforms current popular network representation algorithms on these tasks. The best improvement is 17.13% over the VGAE method.
26

Yuan, Wei, Shiyu Zhao, Li Wang, Lijia Cai, and Yong Zhang. "Online course evaluation model based on graph auto-encoder." Intelligent Data Analysis, March 21, 2024, 1–23. http://dx.doi.org/10.3233/ida-230557.

Abstract:
In the post-epidemic era, online learning has gained increasing attention due to advancements in information and big data technology, leading to large-scale online course data with various student behaviors. Online data mining has become a popular and important way of extracting valuable insights from large amounts of data. However, previous online course analysis methods often focused on individual aspects of the data and neglected the correlation among the large-scale learning behavior data, which can lead to an incomplete understanding of the overall learning behavior and patterns within the online course. To solve these problems, this paper proposes an online course evaluation model based on a graph auto-encoder. In our method, the features of collected online course data are used to construct K-Nearest Neighbor (KNN) graphs to represent the association among the courses. Then the variational graph auto-encoder (VGAE) is introduced to learn useful implicit features. Finally, we feed the learned implicit features into unsupervised and semi-supervised downstream tasks for online course evaluation, respectively. We conduct experiments on two datasets. In the clustering task, our method showed a more than tenfold increase in the Calinski-Harabasz index compared to unoptimized features, demonstrating significant structural distinction and group coherence. In the classification task, compared to traditional methods, our model exhibited an overall performance improvement of about 10%, indicating its effectiveness in handling complex network data.
27

Li, Dongjie, Dong Li, and Guang Lian. "Variational Graph Autoencoder with Mutual Information Maximization for Graph Representations Learning." International Journal of Pattern Recognition and Artificial Intelligence, June 8, 2022. http://dx.doi.org/10.1142/s0218001422520127.

Abstract:
Graph neural network (GNN) is a powerful representation learning framework for graph-structured data. Some GNN-based graph embedding methods, including the variational graph autoencoder (VGAE), have been presented recently. However, existing VGAE-based methods typically focus on reconstructing the adjacency matrix, i.e. the topological structure, instead of the node feature matrix; this strategy makes graphical features difficult to learn fully, which weakens and restricts the capacity of a generative network to learn higher-quality representations. To address the issue, we use a contrastive estimator on the representation mechanism, i.e. on the encoding process under the framework of VGAE. In particular, we maximize the mutual information (MI) between the encoded latent representation and the node attributes, which acts as a regularizer forcing the encoder to select the representation most informative with respect to the node attributes. Additionally, we also address another key question: how to effectively estimate the mutual information by drawing samples from the joint and the marginals, and we explain why the maximization of MI can help the encoder obtain more node feature information. Ultimately, extensive experiments on three citation networks and four web-age networks show that our method outperforms contemporary popular algorithms (such as DGI) on node classification and clustering tasks, and the best result is an [Formula: see text] increase on the node clustering task.
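The contrastive MI estimation this abstract refers to — scoring pairs drawn from the joint against pairs drawn from the product of marginals — can be sketched with a Jensen-Shannon-style lower bound of the kind used in DGI-like models. The snippet below is only an illustrative sketch under that assumption; `discriminator` is a hypothetical torch module that scores a (feature, representation) pair and is not taken from the paper.

```python
import torch
import torch.nn.functional as F

def js_mi_estimate(x, z, discriminator):
    """Jensen-Shannon-style MI lower bound: positive pairs match each node's
    features x with its own representation z (joint sample); negative pairs
    match x with a row-shuffled z (approximate product of marginals)."""
    perm = torch.randperm(z.size(0))
    pos = discriminator(x, z)           # scores for joint samples
    neg = discriminator(x, z[perm])     # scores for marginal samples
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
```

Maximizing this quantity alongside the usual VGAE reconstruction and KL terms is the kind of regularization the abstract describes.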
28

Iwata, Hiroaki, Taichi Nakai, Takuto Koyama, Shigeyuki Matsumoto, Ryosuke Kojima, and Yasushi Okuno. "VGAE-MCTS: A New Molecular Generative Model Combining the Variational Graph Auto-Encoder and Monte Carlo Tree Search." Journal of Chemical Information and Modeling, November 22, 2023. http://dx.doi.org/10.1021/acs.jcim.3c01220.

29

Liu, Zhi, Yang Chen, Feng Xia, Jixin Bian, Bing Zhu, Guojiang Shen, and Xiangjie Kong. "TAP: Traffic Accident Profiling via Multi-task Spatio-Temporal Graph Representation Learning." ACM Transactions on Knowledge Discovery from Data, September 22, 2022. http://dx.doi.org/10.1145/3564594.

Abstract:
Predicting traffic accidents can help traffic management departments respond to sudden traffic situations promptly, improve drivers’ vigilance, and reduce losses caused by traffic accidents. However, the causality of traffic accidents is complex and difficult to analyze. Most existing traffic accident prediction methods do not consider the dynamic spatio-temporal correlation of traffic data, which leads to unsatisfactory prediction accuracy. To address this issue, we propose a multi-task learning framework (TAP) based on Spatio-temporal Variational Graph Auto-Encoders (ST-VGAE) for traffic accident profiling. We first capture the dynamic spatio-temporal correlation of traffic conditions through a spatio-temporal graph convolutional encoder and embed it as a low-dimensional vector. Then we use a multi-task learning scheme to combine external factors to generate the traffic accident profiling. Furthermore, we propose a traffic accident profiling application framework based on edge computing. This method increases the speed of calculation by offloading the calculation of traffic accident profiling to edge nodes. Finally, the experimental results on real datasets demonstrate that TAP outperforms other state-of-the-art baselines.
30

Li, Bo, Chen Peng, Zeran You, Xiaolong Zhang, and Shihua Zhang. "Single-cell RNA-sequencing data clustering using variational graph attention auto-encoder with self-supervised leaning." Briefings in Bioinformatics 24, no. 6 (September 22, 2023). http://dx.doi.org/10.1093/bib/bbad383.

Abstract:
Abstract The emergence of single-cell RNA-seq (scRNA-seq) technology makes it possible to capture differences at the cellular level, which contributes to studying cell heterogeneity. By extracting, amplifying and sequencing the genome at the individual cell level, scRNA-seq can be used to identify unknown or rare cell types as well as genes differentially expressed in specific cell types under different conditions, using clustering for downstream analysis of scRNA-seq. Many clustering algorithms have been developed with much progress. However, scRNA-seq data often exhibit high dimensionality, sparsity and even dropout events, which make the performance of scRNA-seq data clustering unsatisfactory. To circumvent the problem, a new deep learning framework, termed variational graph attention auto-encoder (VGAAE), is constructed for scRNA-seq data clustering. In the proposed VGAAE, a multi-head attention mechanism is introduced to learn more robust low-dimensional representations for the original scRNA-seq data and then self-supervised learning is also recommended to refine the clusters, whose number can be automatically determined using the Jaccard index. Experiments have been conducted on different datasets and results show that VGAAE outperforms some other state-of-the-art clustering methods.
31

Duy Nguyen, Viet Thanh, and Truong Son Hy. "Multimodal pretraining for unsupervised protein representation learning." Biology Methods and Protocols, June 18, 2024. http://dx.doi.org/10.1093/biomethods/bpae043.

Abstract:
Abstract Proteins are complex biomolecules essential for numerous biological processes, making them crucial targets for advancements in molecular biology, medical research, and drug design. Understanding their intricate, hierarchical structures and functions is vital for progress in these fields. To capture this complexity, we introduce MPRL—Multimodal Protein Representation Learning, a novel framework for symmetry-preserving multimodal pretraining that learns unified, unsupervised protein representations by integrating primary and tertiary structures. MPRL employs Evolutionary Scale Modeling (ESM-2) for sequence analysis, Variational Graph Auto-Encoders (VGAE) for residue-level graphs, and PointNet Autoencoder (PAE) for 3D point clouds of atoms, each designed to capture the spatial and evolutionary intricacies of proteins while preserving critical symmetries. By leveraging Auto-Fusion to synthesize joint representations from these pretrained models, MPRL ensures robust and comprehensive protein representations. Our extensive evaluation demonstrates that MPRL significantly enhances performance in various tasks such as protein-ligand binding affinity prediction, protein fold classification, enzyme activity identification, and mutation stability prediction. This framework advances the understanding of protein dynamics and facilitates future research in the field. Our source code is publicly available at https://github.com/HySonLab/Protein_Pretrain.
32

Yi, Jing, and Zhenzhong Chen. "Multi-modal Variational Graph Auto-encoder for Recommendation Systems." IEEE Transactions on Multimedia, 2021, 1. http://dx.doi.org/10.1109/tmm.2021.3111487.

33

Mrabah, Nairouz, Mohamed Bouguessa, and Riadh Ksantini. "A contrastive variational graph auto-encoder for node clustering." Pattern Recognition, December 2023, 110209. http://dx.doi.org/10.1016/j.patcog.2023.110209.

34

Zhang, Yi, Yiwen Zhang, Dengcheng Yan, Shuiguang Deng, and Yun Yang. "Revisiting Graph-based Recommender Systems from the Perspective of Variational Auto-Encoder." ACM Transactions on Information Systems, December 2022. http://dx.doi.org/10.1145/3573385.

Abstract:
Graph-based recommender system has attracted widespread attention and produced a series of research results. Because of the powerful high-order connection modeling capabilities of the Graph Neural Network (GNN), the performance of these graph-based recommender systems are far superior to those of traditional neural network-based collaborative filtering models. However, from both analytical and empirical perspectives, the apparent performance improvement is accompanied with a significant time overhead, which is noticeable in large-scale graph topologies. More importantly, the intrinsic data-sparsity problem substantially limits the performance of graph-based recommender systems, which compelled us to revisit graph-based recommendation from a novel perspective. In this paper, we focus on analyzing the time complexity of graph-based recommender systems to make it more suitable for real large-scale application scenarios. We propose a novel end-to-end graph recommendation model called the Collaborative Variational Graph Auto-Encoder (CVGA), which uses the information propagation and aggregation paradigms to encode user–item collaborative relationships on the user–item interaction bipartite graph. These relationships are utilized to infer the probability distribution of user behavior for parameter estimation rather than learning user or item embeddings. By doing so, we reconstruct the whole user–item interaction graph according to the known probability distribution in a feasible and elegant manner. From the perspective of the graph auto-encoder, we convert the graph recommendation task into a graph generation problem and are able to do it with approximately linear time complexity. Extensive experiments on four real-world benchmark datasets demonstrate that CVGA can be trained at a faster speed while maintaining comparable performance over state-of-the-art baselines for graph-based recommendation tasks. Further analysis shows that CVGA can effectively mitigate the data sparsity problem and performs equally well on large-scale datasets.
35

Zhou, Xin, and Chunyan Miao. "Disentangled Graph Variational Auto-Encoder for Multimodal Recommendation With Interpretability." IEEE Transactions on Multimedia, 2024, 1–13. http://dx.doi.org/10.1109/tmm.2024.3369875.

36

Yi, Jing, Xubin Ren, and Zhenzhong Chen. "Multi-Auxiliary Augmented Collaborative Variational Auto-encoder for Tag Recommendation." ACM Transactions on Information Systems, January 31, 2023. http://dx.doi.org/10.1145/3578932.

Abstract:
Recommending appropriate tags to items can facilitate content organization, retrieval, consumption and other applications, where hybrid tag recommender systems have been utilized to integrate collaborative information and content information for better recommendations. In this paper, we propose a multi-auxiliary augmented collaborative variational auto-encoder (MA-CVAE) for tag recommendation, which couples item collaborative information and item multi-auxiliary information, i.e., content and social graph, by defining a generative process. Specifically, the model learns deep latent embeddings from different item auxiliary information using variational auto-encoders (VAE), which could form a generative distribution over each auxiliary information by introducing a latent variable parameterized by deep neural network. Moreover, to recommend tags for new items, item multi-auxiliary latent embeddings are utilized as a surrogate through the item decoder for predicting recommendation probabilities of each tag, where reconstruction losses are added in the training phase to constrain the generation for feedback predictions via different auxiliary embeddings. In addition, an inductive variational graph auto-encoder is designed to infer latent embeddings of new items in the test phase, such that item social information could be exploited for new items. Extensive experiments on MovieLens and citeulike datasets demonstrate the effectiveness of our method.
37

Chen, Han, Hanchen Wang, Hongmei Chen, Ying Zhang, Wenjie Zhang, and Xuemin Lin. "Denoising Variational Graph of Graphs Auto-Encoder for Predicting Structured Entity Interactions." IEEE Transactions on Knowledge and Data Engineering, 2023, 1–14. http://dx.doi.org/10.1109/tkde.2023.3298490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zhu, Yuan, Feng Zhang, Shihua Zhang, and Ming Yi. "Predicting latent lncRNA and cancer metastatic event associations via variational graph auto-encoder." Methods, January 2023. http://dx.doi.org/10.1016/j.ymeth.2023.01.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gervits, Asia, and Roded Sharan. "Predicting genetic interactions, cell line dependencies and drug sensitivities with variational graph auto-encoder." Frontiers in Bioinformatics 2 (December 2, 2022). http://dx.doi.org/10.3389/fbinf.2022.1025783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Large scale cancer genomics data provide crucial information about the disease and reveal points of intervention. However, systematic data have been collected in specific cell lines and their collection is laborious and costly. Hence, there is a need to develop computational models that can predict such data for any genomic context of interest. Here we develop novel models that build on variational graph auto-encoders and can integrate diverse types of data to provide high quality predictions of genetic interactions, cell line dependencies and drug sensitivities, outperforming previous methods. Our models, data and implementation are available at: https://github.com/aijag/drugGraphNet.
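The authors' implementation is linked above; purely as orientation, the sketch below shows the generic VGAE link-prediction pattern such models build on: a graph-convolutional encoder produces per-node Gaussian parameters and an inner-product decoder scores candidate interactions. The dense adjacency handling, dimensions and normalization are generic assumptions, not details taken from drugGraphNet.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAELinkPredictor(nn.Module):
    """Generic VGAE link-prediction sketch (dense adjacency for brevity)."""
    def __init__(self, in_dim, hidden=64, latent=16):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hidden, bias=False)
        self.w_mu = nn.Linear(hidden, latent, bias=False)
        self.w_logvar = nn.Linear(hidden, latent, bias=False)

    def encode(self, a_norm, x):
        h = F.relu(a_norm @ self.w0(x))                 # one graph-convolution layer
        return a_norm @ self.w_mu(h), a_norm @ self.w_logvar(h)

    def forward(self, a_norm, x):
        mu, logvar = self.encode(a_norm, x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return torch.sigmoid(z @ z.t()), mu, logvar     # inner-product edge decoder

# usage: a is a binary interaction graph over genes/cell lines, x holds node features
n = 200
a = (torch.rand(n, n) < 0.05).float(); a = ((a + a.t()) > 0).float()
a_hat = a + torch.eye(n)
d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt                # symmetric normalization
x = torch.randn(n, 32)
adj_prob, mu, logvar = VGAELinkPredictor(32)(a_norm, x)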
40

Ding, Yulian, Xiujuan Lei, Bo Liao, and Fangxiang Wu. "Predicting miRNA-Disease Associations Based on Multi-View Variational Graph Auto-Encoder with Matrix Factorization." IEEE Journal of Biomedical and Health Informatics, 2021, 1. http://dx.doi.org/10.1109/jbhi.2021.3088342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Fu, Yao, Runtao Yang, and Lina Zhang. "Association prediction of CircRNAs and diseases using multi-homogeneous graphs and variational graph auto-encoder." Computers in Biology and Medicine, November 2022, 106289. http://dx.doi.org/10.1016/j.compbiomed.2022.106289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Aftab, Rukhma, Yan Qiang, Juanjuan Zhao, Zia Urrehman, and Zijuan Zhao. "Graph Neural Network for representation learning of lung cancer." BMC Cancer 23, no. 1 (October 26, 2023). http://dx.doi.org/10.1186/s12885-023-11516-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The emergence of image-based systems to improve diagnostic pathology precision, which involve labeling sets or bags of instances, hinges greatly on Multiple Instance Learning for Whole Slide Images (WSIs). Contemporary works have shown excellent performance for neural networks in MIL settings. Here, we examine a graph-based model to facilitate end-to-end learning and sample suitable patches using a tile-based approach. We propose MIL-GNN, which employs a graph-based variational auto-encoder with a Gaussian mixture model to discover relations between sample patches and aggregate patch details into a single vector representation. Using the classical MIL dataset MUSK and the task of distinguishing two lung cancer sub-types, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), we demonstrate the efficacy of our technique. We achieved a 97.42% accuracy on the MUSK dataset and a 94.3% AUC on the classification of lung cancer sub-types using the learned features.
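As a rough illustration of the aggregation idea only (not the MIL-GNN architecture), the sketch below soft-assigns patch embeddings to Gaussian mixture components and pools the responsibility-weighted means into one bag-level vector per slide; the component count, embedding size and function name are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def bag_vector(patch_embeddings, n_components=4, seed=0):
    """Pool a bag of patch embeddings into a single slide-level vector."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(patch_embeddings)
    resp = gmm.predict_proba(patch_embeddings)          # (n_patches, n_components)
    # responsibility-weighted average of patches per component
    pooled = resp.T @ patch_embeddings / (resp.sum(0, keepdims=True).T + 1e-8)
    return pooled.reshape(-1)                           # concatenate component summaries

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 128))                   # e.g. 500 WSI tile embeddings
slide_repr = bag_vector(patches)                        # vector fed to a downstream classifier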
43

Ngo, Nhat Khang, and Truong Son Hy. "Multimodal Protein Representation Learning and Target-aware Variational Auto-encoders for Protein-binding Ligand Generation." Machine Learning: Science and Technology, April 15, 2024. http://dx.doi.org/10.1088/2632-2153/ad3ee4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Without knowledge of specific pockets, generating ligands based on the global structure of a protein target plays a crucial role in drug discovery, as it helps reduce the search space for potential drug-like candidates in the pipeline. However, contemporary methods require optimizing tailored networks for each protein, which is arduous and costly. To address this issue, we introduce TargetVAE, a target-aware variational auto-encoder that generates ligands with desirable properties, including high binding affinity and high synthesizability, for arbitrary target proteins, guided by a multimodal deep neural network built on geometric and sequence models, named Protein Multimodal Network (PMN), as the prior for the generative model. PMN unifies different representations of proteins (e.g., primary structure as a sequence of amino acids, 3D tertiary structure, and residue-level graph) into a single representation. Our multimodal architecture learns from the entire protein structure and is able to capture sequential, topological, and geometrical information by utilizing language modeling, graph neural networks, and geometric deep learning. We showcase the superiority of our approach through extensive experiments and evaluations, including predicting protein–ligand binding affinity on the PDBBind v2020 dataset as well as assessing generative model quality, ligand generation for unseen targets, and docking score computation. Empirical results demonstrate the promising and competitive performance of our proposed approach. Our software package is publicly available at https://github.com/HySonLab/Ligand_Generation.
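For orientation only, the sketch below shows a generic conditional-VAE pattern of the kind such target-aware generators rely on: the ligand latent is encoded and decoded conditioned on a fixed-size protein embedding (a stand-in for a PMN-style representation). The flat ligand feature vectors, all dimensions, and the class name are illustrative simplifications, not the paper's actual molecule representation.

import torch
import torch.nn as nn

class ConditionalLigandVAE(nn.Module):
    """Generic conditional-VAE sketch: ligands are reduced to flat feature
    vectors purely for illustration; prot is a fixed-size protein embedding."""
    def __init__(self, lig_dim, prot_dim, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(lig_dim + prot_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent + prot_dim, 256), nn.ReLU(),
                                 nn.Linear(256, lig_dim))

    def forward(self, lig, prot):
        h = self.enc(torch.cat([lig, prot], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, prot], dim=-1)), mu, logvar

    @torch.no_grad()
    def generate(self, prot, n=10):
        # sample latents from the prior and decode, conditioned on the target
        z = torch.randn(n, self.mu.out_features)
        return self.dec(torch.cat([z, prot.expand(n, -1)], dim=-1))

model = ConditionalLigandVAE(lig_dim=512, prot_dim=128)
candidates = model.generate(torch.randn(1, 128))        # candidate ligands for one target embedding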
44

Zhang, Yihao, Yuhao Wang, Wei Zhou, Pengxiang Lan, Haoran Xiang, Junlin Zhu, and Meng Yuan. "Conversational recommender based on graph sparsification and multi-hop attention." Intelligent Data Analysis, September 14, 2023, 1–21. http://dx.doi.org/10.3233/ida-230148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conversational recommender systems provide users with item recommendations via interactive dialogues. Graph neural networks have proven to be an effective representation-learning framework for the knowledge graphs used by existing methods. However, the knowledge graph involved in the dialogue context is vast and noisy, and noisy graph nodes in particular restrict the primary node's aggregation over its neighbor nodes. In addition, although a recurrent neural network can encode the local structure of word sequences in a dialogue context, it may still struggle to remember long-term dependencies. To tackle these problems, we propose a sparse multi-hop conversational recommender model named SMCR, which accurately identifies important edges by matching items, thus reducing the computational complexity of the sparse graph. Specifically, we design a multi-hop attention network to encode the dialogue context, which can quickly encode long dialogue sequences and capture long-term dependencies. Furthermore, we utilize a variational auto-encoder to learn topic information for capturing syntactic dependencies. Extensive experiments on the travel dialogue dataset show that our proposed model significantly improves over state-of-the-art methods on both recommendation and dialogue generation.
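The sketch below is not the SMCR network, only a minimal illustration of what "multi-hop attention over a dialogue context" can look like: a query vector is repeatedly refined against the dialogue-token embeddings so that distant turns can still influence the final context encoding. Hop count, dimensions and the update rule are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHopAttention(nn.Module):
    """Minimal multi-hop attention sketch over a dialogue context."""
    def __init__(self, dim, hops=3):
        super().__init__()
        self.hops = hops
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, query, context):                  # query: (B, D), context: (B, T, D)
        for _ in range(self.hops):
            scores = torch.einsum('bd,btd->bt', query, context)
            attn = F.softmax(scores, dim=-1)
            read = torch.einsum('bt,btd->bd', attn, context)
            query = torch.tanh(self.update(torch.cat([query, read], dim=-1)))
        return query                                    # dialogue-context encoding

encoder = MultiHopAttention(dim=64)
ctx = encoder(torch.randn(2, 64), torch.randn(2, 50, 64))   # 50 dialogue tokens/turns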
45

Li, Yunyi, Yongjing Hao, Pengpeng Zhao, Guanfeng Liu, Yanchi Liu, Victor S. Sheng, and Xiaofang Zhou. "Edge-Enhanced Global Disentangled Graph Neural Network for Sequential Recommendation." ACM Transactions on Knowledge Discovery from Data, February 6, 2023. http://dx.doi.org/10.1145/3577928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sequential recommendation has been a widely studied topic in recommender systems. Existing works have enhanced the prediction ability of sequential recommendation systems with various methods, such as recurrent networks and self-attention mechanisms. However, they fail to discover and distinguish the various relationships between items, which could be the underlying factors that motivate user behaviors. In this paper, we propose an Edge-Enhanced Global Disentangled Graph Neural Network (EGD-GNN) model to capture the relation information between items for global item representation and local user intention learning. At the global level, we build a global-link graph over all sequences to model item relationships. A channel-aware disentangled learning layer then decomposes edge information into different channels, which can be aggregated to represent the target item from its neighbors. At the local level, we apply a variational auto-encoder framework to learn the user intention over the current sequence. We evaluate our proposed method on three real-world datasets. Experimental results show that our model achieves a significant improvement over state-of-the-art baselines and is able to distinguish item features.
46

Peng, Lihong, Liangliang Huang, Qiongli Su, Geng Tian, Min Chen, and Guosheng Han. "LDA-VGHB: identifying potential lncRNA–disease associations with singular value decomposition, variational graph auto-encoder and heterogeneous Newton boosting machine." Briefings in Bioinformatics 25, no. 1 (November 22, 2023). http://dx.doi.org/10.1093/bib/bbad466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Long noncoding RNAs (lncRNAs) participate in various biological processes and are closely linked with diseases. In vivo and in vitro experiments have validated many associations between lncRNAs and diseases. However, biological experiments are time-consuming and expensive. Here, we introduce LDA-VGHB, an lncRNA–disease association (LDA) identification framework that combines feature extraction based on singular value decomposition and a variational graph auto-encoder with LDA classification based on a heterogeneous Newton boosting machine. LDA-VGHB was compared with four classical LDA prediction methods (i.e. SDLDA, LDNFSGB, IPCARF and LDASR) and four popular boosting models (XGBoost, AdaBoost, CatBoost and LightGBM) under 5-fold cross-validations on lncRNAs, diseases, lncRNA–disease pairs, and independent lncRNAs and independent diseases, respectively. It greatly outperformed the other methods under the four different cross-validations on the lncRNADisease and MNDR databases. We further investigated potential lncRNAs for lung cancer, breast cancer, colorectal cancer and kidney neoplasms and inferred the top 20 lncRNAs associated with each of them among all their unobserved lncRNAs. The results showed that most of the predicted top 20 lncRNAs have been verified by biomedical experiments recorded in the Lnc2Cancer 3.0, lncRNADisease v2.0 and RNADisease databases as well as in publications. We found that HAR1A, KCNQ1DN, ZFAT-AS1 and HAR1B could be associated with lung cancer, breast cancer, colorectal cancer and kidney neoplasms, respectively. These results need further biological experimental validation. We foresee that LDA-VGHB is capable of identifying possible lncRNAs for complex diseases. LDA-VGHB is publicly available at https://github.com/plhhnu/LDA-VGHB.
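The authors publish their own code at the URL above; the sketch below only illustrates the overall pipeline shape (SVD-derived pair features feeding a boosted classifier) under stated substitutions: scikit-learn's TruncatedSVD and GradientBoostingClassifier stand in for the paper's feature extractors and its heterogeneous Newton boosting machine, and the VGAE feature branch is omitted.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
assoc = (rng.random((120, 80)) < 0.03).astype(float)    # toy lncRNA x disease association matrix

# side-specific SVD features for lncRNAs and diseases
lnc_feat = TruncatedSVD(n_components=32, random_state=0).fit_transform(assoc)
dis_feat = TruncatedSVD(n_components=32, random_state=0).fit_transform(assoc.T)

# concatenate side features for every lncRNA-disease pair
pairs = [(i, j) for i in range(assoc.shape[0]) for j in range(assoc.shape[1])]
X = np.array([np.concatenate([lnc_feat[i], dis_feat[j]]) for i, j in pairs])
y = np.array([assoc[i, j] for i, j in pairs])

# boosted classifier as a stand-in for the heterogeneous Newton boosting machine
clf = GradientBoostingClassifier().fit(X, y)
scores = clf.predict_proba(X)[:, 1]                     # ranked candidate associations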
47

Bhavna, Km, Azman Akhter, Romi Banerjee, and Dipanjan Roy. "Explainable deep-learning framework: decoding brain states and prediction of individual performance in false-belief task at early childhood stage." Frontiers in Neuroinformatics 18 (June 28, 2024). http://dx.doi.org/10.3389/fninf.2024.1392661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Decoding of cognitive states aims to identify individuals' brain states and brain fingerprints in order to predict behavior. Deep learning provides an important platform for analyzing brain signals at different developmental stages to understand brain dynamics. Due to their internal architecture and feature extraction techniques, existing machine-learning and deep-learning approaches suffer from low classification performance and explainability issues that must be improved. In the current study, we hypothesized that even at an early childhood stage (as early as 3 years), connectivity between brain regions could decode brain states and predict behavioral performance in false-belief tasks. To this end, we proposed an explainable deep learning framework to decode brain states (Theory of Mind and Pain states) and predict individual performance on ToM-related false-belief tasks in a developmental dataset. We proposed an explainable spatiotemporal connectivity-based Graph Convolutional Neural Network (Ex-stGCNN) model for decoding brain states. We consider a developmental dataset, N = 155 (122 children, 3–12 yrs, and 33 adults, 18–39 yrs), in which participants watched a short, soundless animated movie shown to activate the Theory-of-Mind (ToM) and pain networks. After scanning, the participants underwent a ToM-related false-belief task, leading to categorization into pass, fail, and inconsistent groups based on performance. We trained our proposed model using Functional Connectivity (FC) and Inter-Subject Functional Correlation (ISFC) matrices separately. We observed that the stimulus-driven feature set (ISFC) could capture ToM and Pain brain states more accurately, with an average accuracy of 94%, whereas the model achieved 85% accuracy using FC matrices. We also validated our results using five-fold cross-validation and achieved an average accuracy of 92%. In addition, we applied the SHapley Additive exPlanations (SHAP) approach to identify the brain fingerprints that contributed most to the predictions. We hypothesized that ToM-network brain connectivity could predict individual performance on false-belief tasks. We proposed an Explainable Convolutional Variational Auto-Encoder (Ex-Convolutional VAE) model to predict individual performance on false-belief tasks and trained the model using FC and ISFC matrices separately. ISFC matrices again outperformed FC matrices in predicting individual performance. We achieved 93.5% accuracy with an F1-score of 0.94 using ISFC matrices and 90% accuracy with an F1-score of 0.91 using FC matrices.

To the bibliography