Journal articles on the topic "Learning on graphs"

Consult the top 50 journal articles for your research on the topic "Learning on graphs".

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Huang, Xueqin, Xianqiang Zhu, Xiang Xu, Qianzhen Zhang, and Ailin Liang. "Parallel Learning of Dynamics in Complex Systems". Systems 10, no. 6 (December 15, 2022): 259. http://dx.doi.org/10.3390/systems10060259.

Abstract
Dynamics always exist in complex systems. Graphs (complex networks) are a mathematical form for describing a complex system abstractly. Dynamics can be learned efficiently from the structure and dynamics state of a graph. Learning the dynamics in graphs plays an important role in predicting and controlling complex systems. Most of the methods for learning dynamics in graphs run slowly in large graphs. The complexity of the large graph’s structure and its nonlinear dynamics aggravate this problem. To overcome these difficulties, we propose a general framework with two novel methods in this paper, the Dynamics-METIS (D-METIS) and the Partitioned Graph Neural Dynamics Learner (PGNDL). The general framework combines D-METIS and PGNDL to perform tasks for large graphs. D-METIS is a new algorithm that can partition a large graph into multiple subgraphs. D-METIS innovatively considers the dynamic changes in the graph. PGNDL is a new parallel model that consists of ordinary differential equation systems and graph neural networks (GNNs). It can quickly learn the dynamics of subgraphs in parallel. In this framework, D-METIS provides PGNDL with partitioned subgraphs, and PGNDL can solve the tasks of interpolation and extrapolation prediction. We exhibit the universality and superiority of our framework on four kinds of graphs with three kinds of dynamics through an experiment.
2

Zeng, Jiaqi, and Pengtao Xie. "Contrastive Self-supervised Learning for Graph Classification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10824–32. http://dx.doi.org/10.1609/aaai.v35i12.17293.

Abstract
Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels, then finetune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we perform data augmentation to create augmented graphs out of the original graphs. An augmented graph is created by consecutively applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs are from the same original graph. Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods. The code is available at https://github.com/UCSD-AI4H/GraphSSL.
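The mechanism sketched in this abstract (create two augmented views of every graph, then train an encoder to tell which views come from the same original) can be illustrated with a small, self-contained example. The edge-dropping augmentation, the degree-histogram stand-in for a GNN encoder, and the NT-Xent-style loss below are illustrative assumptions, not the authors' implementation; their code lives at the GitHub link above.

import random
import numpy as np

def drop_edges(edges, p=0.2):
    """Augment a graph (edge list) by randomly removing a fraction p of its edges."""
    return [e for e in edges if random.random() > p]

def encode(edges, num_nodes, dim=8):
    """Toy graph encoder: a normalized degree histogram (stand-in for a GNN)."""
    deg = np.zeros(num_nodes)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist, _ = np.histogram(deg, bins=dim, range=(0, dim))
    return hist / (np.linalg.norm(hist) + 1e-9)

def nt_xent(z, tau=0.5):
    """Contrastive loss: rows 2k and 2k+1 of z are two views of the same graph."""
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)            # never contrast a view with itself
    loss = 0.0
    for i in range(len(z)):
        pos = i + 1 if i % 2 == 0 else i - 1  # index of the positive view
        loss += -sim[i, pos] + np.log(np.exp(sim[i]).sum())
    return loss / len(z)

graphs = [[(0, 1), (1, 2), (2, 3), (3, 0)], [(0, 1), (0, 2), (0, 3), (2, 3)]]
views = []
for g in graphs:                              # two augmented views per original graph
    views += [encode(drop_edges(g), 4), encode(drop_edges(g), 4)]
print("contrastive loss:", nt_xent(np.array(views)))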
3

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Abstract
Graph embedding techniques make it possible to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of the line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compare it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
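A rough sketch of the core idea, turning each triple into a node of a line graph and then walking over that graph so that a SkipGram model can be trained on the walks, is given below. The tiny knowledge graph, the uniform unweighted walks, and the omission of the paper's semantic-proximity edge weights are simplifications assumed for illustration.

import random
import networkx as nx

# A tiny knowledge graph: each edge carries a (subject, predicate, object) triple.
triples = [("alice", "knows", "bob"), ("bob", "worksAt", "acme"),
           ("alice", "worksAt", "acme"), ("acme", "locatedIn", "rome")]

kg = nx.DiGraph()
for s, p, o in triples:
    kg.add_edge(s, o, predicate=p)

# Line graph: every edge (triple) of the KG becomes a node; two triples are
# adjacent when the object of one is the subject of the other.
lg = nx.line_graph(kg)

def random_walk(graph, start, length=5):
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(graph.successors(walk[-1]))
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return walk

# Walks over the line graph are sequences of triples; they play the role of
# "sentences" for a SkipGram model (e.g., gensim's Word2Vec) that embeds triples.
walks = [random_walk(lg, node) for node in lg.nodes for _ in range(2)]
for w in walks[:3]:
    print([f"{u}-[{kg[u][v]['predicate']}]->{v}" for u, v in w])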
4

Zainullina, R. "Automatic Graph Generation for E-learning Systems". Bulletin of Science and Practice 7, no. 6 (June 15, 2021): 12–16. http://dx.doi.org/10.33619/2414-2948/67/01.

Abstract
The subject of the research is one of the ways of updating modern training systems for solving problems of graph theory, namely, automatic generation of graphs. This approach will reduce the load on the training system database and generate tasks for the user in real-time without updating the bank of tasks. In the course of the work, the advantages and disadvantages of this approach were identified. The most suitable method for the implementation of the research was chosen to represent graphs in electronic computers. The requirements for generated graphs and possible ways of implementing these requirements are identified and substantiated. Namely: in the implemented program, simple connected undirected graphs will be generated. We considered an important detail in working with graphs — graph traversal using the “Depth (width) search” algorithm, which in this task is used to check the graph for connectivity. The result of the work is presented — a software implementation of the graph generation algorithm in the C# programming language. In it, graphs are represented by an adjacency list, generated randomly, and checked for connectivity using the DFS (Depth First Search) function. DFS is a software implementation of the Depth First Search algorithm.
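The generate-and-check loop described here (random simple undirected graphs stored as adjacency lists, validated for connectivity with a depth-first search) translates naturally into a few lines of code. The sketch below is in Python rather than the article's C#, and the edge probability is an illustrative parameter.

import random

def random_simple_graph(n, p=0.3):
    """Random simple undirected graph on n vertices, stored as an adjacency list."""
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:       # each possible edge appears with probability p
                adj[u].append(v)
                adj[v].append(u)
    return adj

def is_connected(adj):
    """Depth-first search from vertex 0; connected iff DFS reaches every vertex."""
    seen, stack = set(), [0]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return len(seen) == len(adj)

def generate_connected_graph(n, p=0.3):
    """Regenerate until the DFS connectivity check passes, as in the described system."""
    while True:
        adj = random_simple_graph(n, p)
        if is_connected(adj):
            return adj

print(generate_connected_graph(6))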
5

Hu, Shengze, Weixin Zeng, Pengfei Zhang, and Jiuyang Tang. "Neural Graph Similarity Computation with Contrastive Learning". Applied Sciences 12, no. 15 (July 29, 2022): 7668. http://dx.doi.org/10.3390/app12157668.

Abstract
Computing the similarity between graphs is a longstanding and challenging problem with many real-world applications. Recent years have witnessed a rapid increase in neural-network-based methods, which project graphs into embedding space and devise end-to-end frameworks to learn to estimate graph similarity. Nevertheless, these solutions usually design complicated networks to capture the fine-grained interactions between graphs, and hence have low efficiency. Additionally, they rely on labeled data for training the neural networks and overlook the useful information hidden in the graphs themselves. To address the aforementioned issues, in this work, we put forward a contrastive neural graph similarity learning framework, Conga. Specifically, we utilize vanilla graph convolutional networks to generate the graph representations and capture the cross-graph interactions via a simple multilayer perceptron. We further devise an unsupervised contrastive loss to discriminate the graph embeddings and guide the training process by learning more expressive entity representations. Extensive experiment results on public datasets validate that our proposal has more robust performance and higher efficiency compared with state-of-the-art methods.
6

Xiang, Xintao, Tiancheng Huang, and Donglin Wang. "Learning to Evolve on Dynamic Graphs (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13091–92. http://dx.doi.org/10.1609/aaai.v36i11.21682.

Abstract
Representation learning in dynamic graphs is a challenging problem because the topology of the graph and node features vary over time. This requires the model to be able to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to extract temporal information from dynamic graphs, and thus they inherit the same drawbacks as RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG) - a novel algorithm that jointly learns graph information and time information. Specifically, our approach utilizes gradient-based meta-learning to learn updating strategies that have better generalization ability than RNNs on snapshots. It is model-agnostic and thus can train any message-passing-based graph neural network (GNN) on dynamic graphs. To enhance the representation power, we disentangle the embeddings into time embeddings and graph intrinsic embeddings. We conduct experiments on various datasets and downstream tasks, and the experimental results validate the effectiveness of our method.
7

Kim, Jaehyeon, Sejong Lee, Yushin Kim, Seyoung Ahn, and Sunghyun Cho. "Graph Learning-Based Blockchain Phishing Account Detection with a Heterogeneous Transaction Graph". Sensors 23, no. 1 (January 1, 2023): 463. http://dx.doi.org/10.3390/s23010463.

Abstract
Recently, cybercrimes that exploit the anonymity of blockchain are increasing. They steal blockchain users’ assets, threaten the network’s reliability, and destabilize the blockchain network. Therefore, it is necessary to detect blockchain cybercriminal accounts to protect users’ assets and sustain the blockchain ecosystem. Many studies have been conducted to detect cybercriminal accounts in the blockchain network. They represented blockchain transaction records as homogeneous transaction graphs with multi-edges. They also adopted graph learning algorithms to analyze transaction graphs. However, most graph learning algorithms are not efficient on multi-edge graphs, and homogeneous graphs ignore the heterogeneity of the blockchain network. In this paper, we propose a novel heterogeneous graph structure called an account-transaction graph, ATGraph. ATGraph represents multi-edges as single edges by considering transactions as nodes, which allows graphs to be learned more efficiently by eliminating multi-edges. Moreover, we compare the performance of ATGraph with homogeneous transaction graphs in various graph learning algorithms. The experimental results demonstrate that the detection performance using ATGraph as input outperforms that using homogeneous graphs as input by up to 0.2 AUROC.
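The central transformation, replacing every transaction multi-edge between two accounts with a transaction node linked to its sender and receiver, can be sketched as follows; the field names and the tiny example ledger are assumptions made for illustration.

# Each record is (sender, receiver, amount); repeated pairs form multi-edges
# in a plain account-to-account transaction graph.
transactions = [("acct_a", "acct_b", 1.5), ("acct_a", "acct_b", 0.7),
                ("acct_b", "acct_c", 2.0)]

def to_account_transaction_graph(transactions):
    """Heterogeneous graph: account nodes plus one node per transaction,
    so parallel transactions become distinct single edges."""
    nodes, edges = set(), []
    for i, (src, dst, amount) in enumerate(transactions):
        tx = f"tx_{i}"
        nodes.update([src, dst, tx])
        edges.append((src, tx, {"type": "sends", "amount": amount}))
        edges.append((tx, dst, {"type": "receives", "amount": amount}))
    return nodes, edges

nodes, edges = to_account_transaction_graph(transactions)
print(sorted(n for n in nodes if n.startswith("tx_")))   # transaction nodes
print(edges[:2])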
8

Ma, Yunpu, and Volker Tresp. "Quantum Machine Learning Algorithm for Knowledge Graphs". ACM Transactions on Quantum Computing 2, no. 3 (September 30, 2021): 1–28. http://dx.doi.org/10.1145/3467982.

Abstract
Semantic knowledge graphs are large-scale triple-oriented databases for knowledge representation and reasoning. Implicit knowledge can be inferred by modeling the tensor representations generated from knowledge graphs. However, as the sizes of knowledge graphs continue to grow, classical modeling becomes increasingly computationally resource intensive. This article investigates how to capitalize on quantum resources to accelerate the modeling of knowledge graphs. In particular, we propose the first quantum machine learning algorithm for inference on tensorized data, i.e., on knowledge graphs. Since most tensor problems are NP-hard [18], it is challenging to devise quantum algorithms to support the inference task. We simplify the modeling task by making the plausible assumption that the tensor representation of a knowledge graph can be approximated by its low-rank tensor singular value decomposition, which is verified by our experiments. The proposed sampling-based quantum algorithm achieves speedup with a polylogarithmic runtime in the dimension of knowledge graph tensor.
9

Ma, Guixiang, Nesreen K. Ahmed, Theodore L. Willke, and Philip S. Yu. "Deep graph similarity learning: a survey". Data Mining and Knowledge Discovery 35, no. 3 (March 24, 2021): 688–725. http://dx.doi.org/10.1007/s10618-020-00733-5.

Abstract
In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search. Recently, there has been an increasing interest in deep graph similarity learning, where the key idea is to learn a deep learning model that maps input graphs to a target space such that the distance in the target space approximates the structural distance in the input space. Here, we provide a comprehensive review of the existing literature of deep graph similarity learning. We propose a systematic taxonomy for the methods and applications. Finally, we discuss the challenges and future directions for this problem.
10

Lee, Namkyeong, Junseok Lee, and Chanyoung Park. "Augmentation-Free Self-Supervised Learning on Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7372–80. http://dx.doi.org/10.1609/aaai.v36i7.20700.

Abstract
Inspired by the recent success of self-supervised methods applied on images, self-supervised learning on graph structured data has seen rapid growth especially centered on augmentation-based contrastive methods. However, we argue that without carefully designed augmentation techniques, augmentations on graphs may behave arbitrarily in that the underlying semantics of graphs can drastically change. As a consequence, the performance of existing augmentation-based methods is highly dependent on the choice of augmentation scheme, i.e., augmentation hyperparameters and combinations of augmentation. In this paper, we propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL. Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph. Extensive experiments towards various node-level tasks, i.e., node classification, clustering, and similarity search on various real-world datasets demonstrate the superiority of AFGRL. The source code for AFGRL is available at https://github.com/Namkyeong/AFGRL.
11

Zhang, Tong, Yun Wang, Zhen Cui, Chuanwei Zhou, Baoliang Cui, Haikuan Huang, and Jian Yang. "Deep Wasserstein Graph Discriminant Learning for Graph Classification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10914–22. http://dx.doi.org/10.1609/aaai.v35i12.17303.

Abstract
Graph topological structures are crucial to distinguish different-class graphs. In this work, we propose a deep Wasserstein graph discriminant learning (WGDL) framework to learn discriminative embeddings of graphs in Wasserstein-metric (W-metric) matching space. In order to bypass the calculation of W-metric class centers in discriminant analysis, as well as better support batch process learning, we introduce a reference set of graphs (aka graph dictionary) to express those representative graph samples (aka dictionary keys). On the bridge of the graph dictionary, every input graph can be projected into the latent dictionary space through our proposed Wasserstein graph transformation (WGT). In WGT, we formulate inter-graph distance in W-metric space by virtue of the optimal transport (OT) principle, which effectively expresses the correlations of cross-graph structures. To give WGDL better representation ability, we dynamically update the graph dictionary during training by maximizing the ratio of inter-class to intra-class Wasserstein distance. To evaluate our WGDL method, comprehensive experiments are conducted on six graph classification datasets. Experimental results demonstrate the effectiveness and state-of-the-art performance of WGDL.
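As a rough illustration of an optimal-transport distance between graphs, the sketch below matches two equal-sized sets of node features with the Hungarian algorithm; under uniform node weights this assignment cost equals the 1-Wasserstein distance between the two point sets. The random features stand in for learned node embeddings, and this is not the paper's WGT transformation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_assignment(x, y):
    """1-Wasserstein distance between two equal-sized sets of node embeddings
    (uniform weights), computed as a minimum-cost perfect matching."""
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
g1 = rng.normal(size=(5, 16))   # stand-in node embeddings of graph 1
g2 = rng.normal(size=(5, 16))   # stand-in node embeddings of graph 2
print("inter-graph W-distance:", wasserstein_assignment(g1, g2))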
12

Christensen, Andrew J., Ananya Sen Gupta, and Ivars Kirsteins. "Graph representation learning on braid manifolds". Journal of the Acoustical Society of America 152, no. 4 (October 2022): A39. http://dx.doi.org/10.1121/10.0015466.

Abstract
The accuracy of autonomous sonar target recognition systems is usually hindered by morphing target features, unknown target geometry, and uncertainty caused by waveguide distortions to the signal. Common “black-box” neural networks are not effective in addressing these challenges since they do not produce physically interpretable features. This work seeks to use recent advancements in machine learning to extract braid features that can be interpreted by a domain expert. We utilize Graph Neural Networks (GNNs) to discover braid manifolds in sonar ping spectra data. This approach represents the sonar ping data as a sequence of timestamped, sparse, dynamic graphs. These dynamic graph sequences are used as input into a GNN to produce feature dictionaries. GNNs’ ability to learn on complex systems of interactions helps make them resilient to environmental uncertainty. To learn the evolving braid-like features of the sonar ping spectra graphs, a modified variation of Temporal Graph Networks (TGNs) is used. TGNs can perform prediction and classification tasks on timestamped dynamic graphs. The modified TGN in this work models the evolution of the sonar ping spectra graph to eventually perform graph-based classification. [Work supported by ONR grant N00014-21-1-2420.]
13

James, Jinto, K. A. Germina, and P. Shaini. "Learning graphs and 1-uniform dcsl graphs". Discrete Mathematics, Algorithms and Applications 09, no. 04 (August 2017): 1750046. http://dx.doi.org/10.1142/s179383091750046x.

Abstract
A distance compatible set labeling (dcsl) of a connected graph [Formula: see text] is an injective set assignment [Formula: see text] [Formula: see text] being a non-empty ground set, such that the corresponding induced function [Formula: see text] given by [Formula: see text] satisfies [Formula: see text] for every pair of distinct vertices [Formula: see text] where [Formula: see text] denotes the path distance between [Formula: see text] and [Formula: see text] and [Formula: see text] is a constant, not necessarily an integer, depending on the pair of vertices [Formula: see text] chosen. A dcsl [Formula: see text] of [Formula: see text] is [Formula: see text]-uniform if all the constants of proportionality with respect to [Formula: see text] are equal to [Formula: see text] and if [Formula: see text] admits such a dcsl then [Formula: see text] is called a [Formula: see text]-uniform dcsl graph. The family [Formula: see text] is well-graded family, if there is a tight path between any two of its distinct sets. A learning graph is an [Formula: see text]-induced graph of a learning space. In this paper, we initiate a study on subgraphs of 1-uniform graphs which lead to the study of Knowledge Structures, Learning Spaces and Union-closed conjecture using graph theory techniques.
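For readers puzzled by the "[Formula: see text]" placeholders, the condition being described appears to be the standard dcsl condition; the reconstruction below is an assumption based on that standard definition, not text recovered from the paper:

f \colon V(G) \to 2^{X} \ \text{(injective)}, \qquad
f^{\oplus}(uv) = f(u) \oplus f(v), \qquad
\lvert f^{\oplus}(uv) \rvert = k_{(u,v)} \, d_G(u,v) \quad \text{for all distinct } u, v \in V(G),

with the labeling called 1-uniform when k_{(u,v)} = 1 for every such pair.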
14

Zhuang, Jun, and Mohammad Al Hasan. "Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-Supervision". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4405–13. http://dx.doi.org/10.1609/aaai.v36i4.20362.

Abstract
In recent years, plentiful evidence illustrates that Graph Convolutional Networks (GCNs) achieve extraordinary accomplishments on the node classification task. However, GCNs may be vulnerable to adversarial attacks on label-scarce dynamic graphs. Many existing works aim to strengthen the robustness of GCNs; for instance, adversarial training is used to shield GCNs against malicious perturbations. However, these works fail on dynamic graphs for which label scarcity is a pressing issue. To overcome label scarcity, self-training attempts to iteratively assign pseudo-labels to highly confident unlabeled nodes but such attempts may suffer serious degradation under dynamic graph perturbations. In this paper, we generalize noisy supervision as a kind of self-supervised learning method and then propose a novel Bayesian self-supervision model, namely GraphSS, to address the issue. Extensive experiments demonstrate that GraphSS can not only affirmatively alert the perturbations on dynamic graphs but also effectively recover the prediction of a node classifier when the graph is under such perturbations. These two advantages prove to be generalized over three classic GCNs across five public graph datasets.
15

Gerofsky, Susan. "Mathematical learning and gesture". Gesture and Multimodal Development 10, no. 2-3 (December 31, 2010): 321–43. http://dx.doi.org/10.1075/gest.10.2-3.10ger.

Abstract
This paper reports on a research project in mathematics education involving the use of gesture, movement and vocal sound to highlight mathematically salient features of the graphs of polynomial functions. Empirical observations of students’ spontaneous gesture types when enacting elicited gestures of these graphs reveal a number of useful binaries (proximal/distal, being the graph/seeing the graph, within sight/within reach). These binaries inform an analysis of videotaped gestural and interview data and appear to predict teachers’ assessments of student mathematical engagement and understanding with great accuracy. Reframing this data in terms of C-VPT and O-VPT adds a further layer of sophistication to the analysis and connects it with deeper findings in cognitive and neuroscience and gesture studies.
16

Bahrami, Saeedeh, Alireza Bosaghzadeh, and Fadi Dornaika. "Multi Similarity Metric Fusion in Graph-Based Semi-Supervised Learning". Computation 7, no. 1 (March 7, 2019): 15. http://dx.doi.org/10.3390/computation7010015.

Abstract
In semi-supervised label propagation (LP), the data manifold is approximated by a graph, which is considered as a similarity metric. Graph estimation is a crucial task, as it affects the further processes applied on the graph (e.g., LP, classification). As our knowledge of data is limited, a single approximation cannot easily find the appropriate graph, so in line with this, multiple graphs are constructed. Recently, multi-metric fusion techniques have been used to construct more accurate graphs which better represent the data manifold and, hence, improve the performance of LP. However, most of these algorithms disregard use of the information of label space in the LP process. In this article, we propose a new multi-metric graph-fusion method, based on the Flexible Manifold Embedding algorithm. Our proposed method represents a unified framework that merges two phases: graph fusion and LP. Based on one available view, different simple graphs were efficiently generated and used as input to our proposed fusion approach. Moreover, our method incorporated the label space information as a new form of graph, namely the Correlation Graph, with other similarity graphs. Furthermore, it updated the correlation graph to find a better representation of the data manifold. Our experimental results on four face datasets in face recognition demonstrated the superiority of the proposed method compared to other state-of-the-art algorithms.
17

Wald, Johanna, Nassir Navab, and Federico Tombari. "Learning 3D Semantic Scene Graphs with Instance Embeddings". International Journal of Computer Vision 130, no. 3 (January 22, 2022): 630–51. http://dx.doi.org/10.1007/s11263-021-01546-9.

Abstract
A 3D scene is more than the geometry and classes of the objects it comprises. An essential aspect beyond object-level perception is the scene context, described as a dense semantic network of interconnected nodes. Scene graphs have become a common representation to encode the semantic richness of images, where nodes in the graph are object entities connected by edges, so-called relationships. Such graphs have been shown to be useful in achieving state-of-the-art performance in image captioning, visual question answering and image generation or editing. While scene graph prediction methods so far focused on images, we propose instead a novel neural network architecture for 3D data, where the aim is to learn to regress semantic graphs from a given 3D scene. With this work, we go beyond object-level perception, by exploring relations between object entities. Our method learns instance embeddings alongside a scene segmentation and is able to predict semantics for object nodes and edges. We leverage 3DSSG, a large scale dataset based on 3RScan that features scene graphs of changing 3D scenes. Finally, we show the effectiveness of graphs as an intermediate representation on a retrieval task.
18

CUCKA, PETER, NATHAN S. NETANYAHU, and AZRIEL ROSENFELD. "LEARNING IN NAVIGATION: GOAL FINDING IN GRAPHS". International Journal of Pattern Recognition and Artificial Intelligence 10, no. 05 (August 1996): 429–46. http://dx.doi.org/10.1142/s0218001496000281.

Abstract
A robotic agent operating in an unknown and complex environment may employ a search strategy of some kind to perform a navigational task such as reaching a given goal. In the process of performing the task, the agent can attempt to discover characteristics of its environment that enable it to choose a more efficient search strategy for that environment. If the agent is able to do this, we can say that it has "learned to navigate" — i.e., to improve its navigational performance. This paper describes how an agent can learn to improve its goal-finding performance in a class of discrete spaces, represented by graphs embedded in the plane. We compare several basic search strategies on two different classes of "random" graphs and show how information collected during the traversal of a graph can be used to classify the graph, thus allowing the agent to choose the search strategy best suited for that graph.
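A minimal way to reproduce the kind of comparison described above is to run different uninformed search strategies on randomly generated graphs and count how many nodes each expands before reaching the goal. The sketch below compares breadth-first and depth-first search on a random graph; the graph model and the "nodes expanded" metric are illustrative choices, not the paper's experimental setup.

import random
from collections import deque

def random_graph(n, p=0.15, seed=1):
    """Random undirected graph on n vertices as an adjacency list."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def search(adj, start, goal, strategy="bfs"):
    """Return the number of nodes expanded before the goal is found."""
    frontier = deque([start])
    seen, expanded = {start}, 0
    while frontier:
        v = frontier.popleft() if strategy == "bfs" else frontier.pop()
        expanded += 1
        if v == goal:
            return expanded
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return None  # goal unreachable in this graph

adj = random_graph(60)
print("BFS expanded:", search(adj, 0, 59, "bfs"))
print("DFS expanded:", search(adj, 0, 59, "dfs"))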
19

Bai, Yunsheng, Hao Ding, Ken Gu, Yizhou Sun, and Wei Wang. "Learning-Based Efficient Graph Similarity Computation via Multi-Scale Convolutional Set Matching". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3219–26. http://dx.doi.org/10.1609/aaai.v34i04.5720.

Abstract
Graph similarity computation is one of the core operations in many graph-based applications, such as graph similarity search, graph database analysis, graph clustering, etc. Since computing the exact distance/similarity between two graphs is typically NP-hard, a series of approximate methods have been proposed with a trade-off between accuracy and speed. Recently, several data-driven approaches based on neural networks have been proposed, most of which model the graph-graph similarity as the inner product of their graph-level representations, with different techniques proposed for generating one embedding per graph. However, using one fixed-dimensional embedding per graph may fail to fully capture graphs in varying sizes and link structures—a limitation that is especially problematic for the task of graph similarity computation, where the goal is to find the fine-grained difference between two graphs. In this paper, we address the problem of graph similarity computation from another perspective, by directly matching two sets of node embeddings without the need to use fixed-dimensional vectors to represent whole graphs for their similarity computation. The model, Graph-Sim, achieves the state-of-the-art performance on four real-world graph datasets under six out of eight settings (here we count a specific dataset and metric combination as one setting), compared to existing popular methods for approximate Graph Edit Distance (GED) and Maximum Common Subgraph (MCS) computation.
20

GIBERT, JAUME, ERNEST VALVENY, and HORST BUNKE. "EMBEDDING OF GRAPHS WITH DISCRETE ATTRIBUTES VIA LABEL FREQUENCIES". International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1360002. http://dx.doi.org/10.1142/s0218001413600021.

Abstract
Graph-based representations of patterns are very flexible and powerful, but they are not easily processed due to the lack of learning algorithms in the domain of graphs. Embedding a graph into a vector space solves this problem since graphs are turned into feature vectors and thus all the statistical learning machinery becomes available for graph input patterns. In this work we present a new way of embedding discrete attributed graphs into vector spaces using node and edge label frequencies. The methodology is experimentally tested on graph classification problems, using patterns of different nature, and it is shown to be competitive to state-of-the-art classification algorithms for graphs, while being computationally much more efficient.
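The embedding described above can be sketched very compactly: a graph with discrete node labels is mapped to a vector counting how often each node label appears and how often each unordered pair of endpoint labels occurs on an edge. The toy label alphabet below is an assumed example.

from itertools import combinations_with_replacement

NODE_LABELS = ["C", "N", "O"]                       # assumed discrete label alphabet
EDGE_KEYS = list(combinations_with_replacement(NODE_LABELS, 2))

def embed(node_labels, edges):
    """Map a labeled graph to a fixed-length vector of label frequencies."""
    vec = [0] * (len(NODE_LABELS) + len(EDGE_KEYS))
    for lab in node_labels.values():                # node-label frequencies
        vec[NODE_LABELS.index(lab)] += 1
    for u, v in edges:                              # endpoint-label-pair frequencies
        key = tuple(sorted((node_labels[u], node_labels[v])))
        vec[len(NODE_LABELS) + EDGE_KEYS.index(key)] += 1
    return vec

# A small molecule-like graph: labeled nodes, unlabeled edges.
labels = {0: "C", 1: "C", 2: "O", 3: "N"}
edges = [(0, 1), (1, 2), (1, 3)]
print(embed(labels, edges))   # any vector-space classifier can now be applied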
21

Jung, Jinhong, Jaemin Yoo, and U. Kang. "Signed random walk diffusion for effective representation learning in signed graphs". PLOS ONE 17, no. 3 (March 17, 2022): e0265001. http://dx.doi.org/10.1371/journal.pone.0265001.

Abstract
How can we model node representations to accurately infer the signs of missing edges in a signed social graph? Signed social graphs have attracted considerable attention to model trust relationships between people. Various representation learning methods such as network embedding and graph convolutional network (GCN) have been proposed to analyze signed graphs. However, existing network embedding models are not end-to-end for a specific task, and GCN-based models exhibit a performance degradation issue when their depth increases. In this paper, we propose Signed Diffusion Network (SidNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a new random walk based feature aggregation, which is specially designed for signed graphs, so that SidNet effectively diffuses hidden node features and uses more information from neighboring nodes. Through extensive experiments, we show that SidNet significantly outperforms state-of-the-art models in terms of link sign prediction accuracy.
22

Javad Hosseini, Mohammad, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, and Mark Steedman. "Learning Typed Entailment Graphs with Global Soft Constraints". Transactions of the Association for Computational Linguistics 6 (December 2018): 703–17. http://dx.doi.org/10.1162/tacl_a_00250.

Abstract
This paper presents a new method for learning typed entailment graphs from text. We extract predicate-argument structures from multiple-source news corpora, and compute local distributional similarity scores to learn entailments between predicates with typed arguments (e.g., person contracted disease). Previous work has used transitivity constraints to improve local decisions, but these constraints are intractable on large graphs. We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph. Learning takes only a few hours to run over 100K predicates and our results show large improvements over local similarity scores on two entailment data sets. We further show improvements over paraphrases and entailments from the Paraphrase Database, and prior state-of-the-art entailment graphs. We show that the entailment graphs improve performance in a downstream task.
23

Balcı, Mehmet Ali, Ömer Akgüller, Larissa M. Batrancea, and Lucian Gaban. "Discrete Geodesic Distribution-Based Graph Kernel for 3D Point Clouds". Sensors 23, no. 5 (February 21, 2023): 2398. http://dx.doi.org/10.3390/s23052398.

Abstract
In the structural analysis of discrete geometric data, graph kernels have a great track record of performance. Using graph kernel functions provides two significant advantages. First, a graph kernel is capable of preserving the graph’s topological structures by describing graph properties in a high-dimensional space. Second, graph kernels allow the application of machine learning methods to vector data that are rapidly evolving into graphs. In this paper, the unique kernel function for similarity determination procedures of point cloud data structures, which are crucial for several applications, is formulated. This function is determined by the proximity of the geodesic route distributions in graphs reflecting the discrete geometry underlying the point cloud. This research demonstrates the efficiency of this unique kernel for similarity measures and the categorization of point clouds.
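One way to make the construction concrete is to build a neighbourhood graph over each point cloud, collect the distribution of geodesic (shortest-path) distances, and compare the distributions with a kernel. The k-nearest-neighbour graph, histogram binning, and RBF comparison below are illustrative choices, not the exact kernel formulated in the paper.

import numpy as np
import networkx as nx

def geodesic_histogram(points, k=4, bins=10, max_dist=3.0):
    """kNN graph over a 3D point cloud -> normalized histogram of geodesic distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    g = nx.Graph()
    for i in range(len(points)):
        for j in np.argsort(d[i])[1:k + 1]:          # k nearest neighbours of point i
            g.add_edge(i, int(j), weight=float(d[i, j]))
    geo = [dist for _, targets in nx.all_pairs_dijkstra_path_length(g)
           for dist in targets.values() if dist > 0]
    hist, _ = np.histogram(geo, bins=bins, range=(0, max_dist), density=True)
    return hist

def rbf_kernel(h1, h2, gamma=1.0):
    """Similarity of two clouds via an RBF on their geodesic-distance histograms."""
    return float(np.exp(-gamma * np.sum((h1 - h2) ** 2)))

rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(40, 3))
cloud_b = rng.normal(size=(40, 3)) * 1.2
print("kernel value:", rbf_kernel(geodesic_histogram(cloud_a), geodesic_histogram(cloud_b)))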
24

Li, Shuai, Wei Chen, Zheng Wen, and Kwong-Sak Leung. "Stochastic Online Learning with Probabilistic Graph Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4675–82. http://dx.doi.org/10.1609/aaai.v34i04.5899.

Abstract
We consider a problem of stochastic online learning with general probabilistic graph feedback, where each directed edge in the feedback graph has probability pij. Two cases are covered. (a) The one-step case, where after playing arm i the learner observes a sample reward feedback of arm j with independent probability pij. (b) The cascade case where after playing arm i the learner observes feedback of all arms j in a probabilistic cascade starting from i – for each (i,j) with probability pij, if arm i is played or observed, then a reward sample of arm j would be observed with independent probability pij. Previous works mainly focus on deterministic graphs which corresponds to one-step case with pij ∈ {0,1}, an adversarial sequence of graphs with certain topology guarantees, or a specific type of random graphs. We analyze the asymptotic lower bounds and design algorithms in both cases. The regret upper bounds of the algorithms match the lower bounds with high probability.
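The one-step feedback model in case (a) is easy to simulate: after playing arm i, the learner additionally observes a reward sample of each arm j with probability pij, and every observation can feed a standard UCB-style estimate. The bandit instance and the plain UCB1 rule below are illustrative, not the algorithms analyzed in the paper.

import math
import random

means = [0.3, 0.5, 0.7]                       # Bernoulli arm means (toy instance)
P = [[1.0, 0.6, 0.1],                         # P[i][j]: probability of observing arm j
     [0.6, 1.0, 0.6],                         # after playing arm i (one-step case)
     [0.1, 0.6, 1.0]]

counts, sums = [0] * 3, [0.0] * 3

def ucb(t, j):
    """Optimistic index of arm j at round t."""
    if counts[j] == 0:
        return float("inf")
    return sums[j] / counts[j] + math.sqrt(2 * math.log(t + 1) / counts[j])

random.seed(0)
for t in range(2000):
    i = max(range(3), key=lambda j: ucb(t, j))      # play the optimistic arm
    for j in range(3):                              # probabilistic graph feedback
        if random.random() < P[i][j]:
            counts[j] += 1
            sums[j] += 1.0 if random.random() < means[j] else 0.0

print("empirical means:", [round(sums[j] / counts[j], 3) for j in range(3)])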
25

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs". PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Abstract
Dealing with relational data always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs under preserving inner graph properties. Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. Our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. First, we start with the methodological approach and extract three types of graph embedding models based on matrix factorization, random-walks and deep learning approaches. Next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. Going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of the whole graph embedding algorithms suitable for graph classification, similarity and alignment problems. Finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems and provide experiment results, explaining how different network properties affect graph embedding quality in the four classic machine learning problems on graphs, such as node classification, link prediction, clustering and graph visualization. As a result, our survey covers a new rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
26

Yao, Huaxiu, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh Chawla, and Zhenhui Li. "Graph Few-Shot Learning via Knowledge Transfer". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6656–63. http://dx.doi.org/10.1609/aaai.v34i04.6142.

Abstract
Towards the challenging problem of semi-supervised node classification, there have been extensive studies. As a frontier, Graph Neural Networks (GNNs) have aroused great interest recently, which update the representation of each node by aggregating information of its neighbors. However, most GNNs have shallow layers with a limited receptive field and may not achieve satisfactory performance especially when the number of labeled nodes is quite small. To address this challenge, we innovatively propose a graph few-shot learning (GFL) algorithm that incorporates prior knowledge learned from auxiliary graphs to improve classification accuracy on the target graph. Specifically, a transferable metric space characterized by a node embedding and a graph-specific prototype embedding function is shared between auxiliary graphs and the target, facilitating the transfer of structural knowledge. Extensive experiments and ablation studies on four real-world graph datasets demonstrate the effectiveness of our proposed model and the contribution of each component.
27

Bera, Abhijit, Mrinal Kanti Ghose, and Dibyendu Kumar Pal. "Graph Classification Using Back Propagation Learning Algorithms". International Journal of Systems and Software Security and Protection 11, no. 2 (July 2020): 1–12. http://dx.doi.org/10.4018/ijsssp.2020070101.

Abstract
Due to the propagation of graph data, there has been a sharp focus on developing effective methods for classifying the graph object. As most of the proposed graph classification techniques though effective are constrained by high computational overhead, there is a consistent effort to improve upon the existing classification algorithms in terms of higher accuracy and less computational time. In this paper, an attempt has been made to classify graphs by extracting various features and selecting the important features using feature selection algorithms. Since all the extracted graph-based features need not be equally important, only the most important features are selected by using back propagation learning algorithm. The results of the proposed study of feature-based approach using back propagation learning algorithm lead to higher classification accuracy with faster computational time in comparison to other graph kernels. It also appears to be more effective for large unlabeled graphs.
28

Abualkishik, Abedallah Z., Rasha Almajed, and William Thompson. "Trustworthy Federated Graph Learning Framework for Wireless Internet of Things". International Journal of Wireless and Ad Hoc Communication 5, no. 2 (2022): 48–63. http://dx.doi.org/10.54216/ijwac.050204.

Abstract
As computational power has increased rapidly in recent years, deep learning techniques have found widespread use in wireless internet of things (IoT) networks, where they have shown remarkable results. To make the most of the data contained in graphs and their surrounding contexts, graph intelligence has seen extensive use in a wide variety of tailored wireless applications. However, the sensitive nature of client data poses serious challenges to conventional customization approaches, which depend on centralised graph learning on globe graphs. In this work, we introduce federated graph learning, dubbed FGL, that can produce accurate personalisation while still protecting clients' anonymity. To train graph intelligence models jointly based on distributed graphs inferred from local data, we employ a trustworthy model updating technique. To make use of graph knowledge beyond the scope of dynamic interplay, we present a trustworthy graph extension mechanism for incorporating high-level knowledge while yet maintaining confidentiality. Six customization datasets were used to show that with excellent trustworthy protection, FGL achieves 2.0% to 5.0% lower errors than the state-of-the-art federated customization approaches. For ethical and insightful personalization, FGL offers a potential path forward for mining distributed graph data.
29

Abualkishik, Abedallah Z., Rasha Almajed, and William Thompson. "Trustworthy Federated Graph Learning Framework for Wireless Internet of Things". International Journal of Wireless and Ad Hoc Communication 6, no. 1 (2023): 50–62. http://dx.doi.org/10.54216/ijwac.060105.

Abstract
As computational power has increased rapidly in recent years, deep learning techniques have found widespread use in wireless internet of things (IoT) networks, where they have shown remarkable results. In order to make the most of the data contained in graphs and their surrounding contexts, graph intelligence has seen extensive use in a wide variety of tailored wireless applications. However, the sensitive nature of client data poses serious challenges to conventional customization approaches, which depend on centralized graph learning on globe graphs. In this work, we introduce federated graph learning, dubbed FGL, that is capable of producing accurate personalization while still protecting clients' anonymity. To train graph intelligence models jointly based on distributed graphs inferred from local data, we employ a trustworthy model updating technique. In order to make use of graph knowledge beyond the scope of dynamic interplay, we present a trustworthy graph extension mechanism for incorporating high-level knowledge while yet maintaining confidentiality. Six customization datasets were used to show that with excellent trustworthy protection, FGL achieves 2.0% to 5.0% lower errors than the state-of-the-art federated customization approaches. For ethical and insightful personalization, FGL offers a potential path forward for mining distributed graph data.
30

Ni, Xiang, Jing Li, Mo Yu, Wang Zhou, and Kun-Lung Wu. "Generalizable Resource Allocation in Stream Processing via Deep Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 857–64. http://dx.doi.org/10.1609/aaai.v34i01.5431.

Abstract
This paper considers the problem of resource allocation in stream processing, where continuous data flows must be processed in real time in a large distributed system. To maximize system throughput, the resource allocation strategy that partitions the computation tasks of a stream processing graph onto computing devices must simultaneously balance workload distribution and minimize communication. Since this problem of graph partitioning is known to be NP-complete yet crucial to practical streaming systems, many heuristic-based algorithms have been developed to find reasonably good solutions. In this paper, we present a graph-aware encoder-decoder framework to learn a generalizable resource allocation strategy that can properly distribute computation tasks of stream processing graphs unobserved from training data. We, for the first time, propose to leverage graph embedding to learn the structural information of the stream processing graphs. Jointly trained with the graph-aware decoder using deep reinforcement learning, our approach can effectively find optimized solutions for unseen graphs. Our experiments show that the proposed model outperforms both METIS, a state-of-the-art graph partitioning algorithm, and an LSTM-based encoder-decoder model, in about 70% of the test cases.
31

Guo, Zhijiang, Yan Zhang, Zhiyang Teng, and Wei Lu. "Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning". Transactions of the Association for Computational Linguistics 7 (November 2019): 297–312. http://dx.doi.org/10.1162/tacl_a_00269.

Abstract
We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation. To capture structural information associated with graphs, we investigate the problem of encoding graphs using graph convolutional networks (GCNs). Unlike various existing approaches where shallow architectures were used for capturing local structural information only, we introduce a dense connection strategy, proposing a novel Densely Connected Graph Convolutional Network (DCGCN). Such a deep architecture is able to integrate both local and non-local features to learn a better structural representation of a graph. Our model outperforms the state-of-the-art neural models significantly on AMR-to-text generation and syntax-based neural machine translation.
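The dense connection strategy can be illustrated at the level of a single stack of GCN layers: each layer takes as input the concatenation of the original node features and all previous layers' outputs, so deeper layers see both local and non-local information. The tiny NumPy forward pass below, with random weights and symmetric normalization of A + I, is a schematic sketch rather than the DCGCN architecture as implemented by the authors.

import numpy as np

def normalize(a):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    a_hat = a + np.eye(len(a))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def densely_connected_gcn(a, x, layer_sizes, rng):
    """Stack of GCN layers where every layer also receives all earlier representations."""
    a_norm = normalize(a)
    h = x
    for size in layer_sizes:
        w = rng.normal(scale=0.1, size=(h.shape[1], size))
        out = np.maximum(a_norm @ h @ w, 0)     # one GCN layer with ReLU
        h = np.concatenate([h, out], axis=1)    # dense connection: keep all earlier features
    return h

a = np.array([[0, 1, 0, 0],                     # a 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.eye(4)                                   # one-hot node features
h = densely_connected_gcn(a, x, [8, 8, 8], np.random.default_rng(0))
print(h.shape)                                  # (4, 4 + 8 + 8 + 8)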
32

Béthune, Louis, Yacouba Kaloga, Pierre Borgnat, Aurélien Garivier, and Amaury Habrard. "Hierarchical and Unsupervised Graph Representation Learning with Loukas’s Coarsening". Algorithms 13, no. 9 (August 21, 2020): 206. http://dx.doi.org/10.3390/a13090206.

Abstract
We propose a novel algorithm for unsupervised graph representation learning with attributed graphs. It combines three advantages addressing some current limitations of the literature: (i) The model is inductive: it can embed new graphs without re-training in the presence of new data; (ii) The method takes into account both micro-structures and macro-structures by looking at the attributed graphs at different scales; (iii) The model is end-to-end differentiable: it is a building block that can be plugged into deep learning pipelines and allows for back-propagation. We show that combining a coarsening method having strong theoretical guarantees with mutual information maximization suffices to produce high quality embeddings. We evaluate them on classification tasks with common benchmarks of the literature. We show that our algorithm is competitive with state of the art among unsupervised graph representation learning methods.
33

Li, Yanying. "Characterizing the Minimal Essential Graphs of Maximal Ancestral Graphs". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 04 (August 5, 2019): 2059009. http://dx.doi.org/10.1142/s0218001420590090.

Abstract
Learning ancestor graph is a typical NP-hard problem. We consider the problem to represent a Markov equivalence class of ancestral graphs with a compact representation. Firstly, the minimal essential graph is defined to represent the equivalent class of maximal ancestral graphs with the minimum number of invariant arrowheads. Then, an algorithm is proposed to learn the minimal essential graph of ancestral graphs based on the detection of minimal collider paths. It is the first algorithm to use necessary and sufficient conditions for Markov equivalence as a base to seek essential graphs. Finally, a set of orientation rules is presented to orient edge marks of a minimal essential graph. Theory analysis shows our algorithm is sound, and complete in the sense of recognizing all minimal collider paths in a given ancestral graph. And the experiment results show we can discover all invariant marks by these orientation rules.
34

Albergante, Luca, Evgeny Mirkes, Jonathan Bac, Huidong Chen, Alexis Martin, Louis Faure, Emmanuel Barillot, Luca Pinello, Alexander Gorban, and Andrei Zinovyev. "Robust and Scalable Learning of Complex Intrinsic Dataset Geometry via ElPiGraph". Entropy 22, no. 3 (March 4, 2020): 296. http://dx.doi.org/10.3390/e22030296.

Abstract
Multidimensional datapoint clouds representing large datasets are frequently characterized by non-trivial low-dimensional geometry and topology which can be recovered by unsupervised machine learning approaches, in particular, by principal graphs. Principal graphs approximate the multivariate data by a graph injected into the data space with some constraints imposed on the node mapping. Here we present ElPiGraph, a scalable and robust method for constructing principal graphs. ElPiGraph exploits and further develops the concept of elastic energy, the topological graph grammar approach, and a gradient descent-like optimization of the graph topology. The method is able to withstand high levels of noise and is capable of approximating data point clouds via principal graph ensembles. This strategy can be used to estimate the statistical significance of complex data features and to summarize them into a single consensus principal graph. ElPiGraph deals efficiently with large datasets in various fields such as biology, where it can be used for example with single-cell transcriptomic or epigenomic datasets to infer gene expression dynamics and recover differentiation landscapes.
35

Jaeger, Manfred, Jens D. Nielsen, and Tomi Silander. "Learning probabilistic decision graphs". International Journal of Approximate Reasoning 42, no. 1-2 (May 2006): 84–100. http://dx.doi.org/10.1016/j.ijar.2005.10.006.

36

Thanou, Dorina, Xiaowen Dong, Daniel Kressner, and Pascal Frossard. "Learning Heat Diffusion Graphs". IEEE Transactions on Signal and Information Processing over Networks 3, no. 3 (September 2017): 484–99. http://dx.doi.org/10.1109/tsipn.2017.2731164.

37

Zhu, Jun. "Learning representations on graphs". National Science Review 5, no. 1 (January 1, 2018): 21. http://dx.doi.org/10.1093/nsr/nwx147.

38

Madhawa, Kaushalya, and Tsuyoshi Murata. "Active Learning for Node Classification: An Evaluation". Entropy 22, no. 10 (October 16, 2020): 1164. http://dx.doi.org/10.3390/e22101164.

Abstract
Current breakthroughs in the field of machine learning are fueled by the deployment of deep neural network models. Deep neural networks models are notorious for their dependence on large amounts of labeled data for training them. Active learning is being used as a solution to train classification models with less labeled instances by selecting only the most informative instances for labeling. This is especially important when the labeled data are scarce or the labeling process is expensive. In this paper, we study the application of active learning on attributed graphs. In this setting, the data instances are represented as nodes of an attributed graph. Graph neural networks achieve the current state-of-the-art classification performance on attributed graphs. The performance of graph neural networks relies on the careful tuning of their hyperparameters, usually performed using a validation set, an additional set of labeled instances. In label scarce problems, it is realistic to use all labeled instances for training the model. In this setting, we perform a fair comparison of the existing active learning algorithms proposed for graph neural networks as well as other data types such as images and text. With empirical results, we demonstrate that state-of-the-art active learning algorithms designed for other data types do not perform well on graph-structured data. We study the problem within the framework of the exploration-vs.-exploitation trade-off and propose a new count-based exploration term. With empirical evidence on multiple benchmark graphs, we highlight the importance of complementing uncertainty-based active learning models with an exploration term.
39

Sun, Yuan, Andong Chen, Chaofan Chen, Tianci Xia, and Xiaobing Zhao. "A Joint Model for Representation Learning of Tibetan Knowledge Graph Based on Encyclopedia". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 2 (March 30, 2021): 1–17. http://dx.doi.org/10.1145/3447248.

Abstract
Learning the representation of a knowledge graph is critical to the field of natural language processing. There is a lot of research for English knowledge graph representation. However, for the low-resource languages, such as Tibetan, how to represent sparse knowledge graphs is a key problem. In this article, aiming at scarcity of Tibetan knowledge graphs, we extend the Tibetan knowledge graph by using the triples of the high-resource language knowledge graphs and Point of Information map information. To improve the representation learning of the Tibetan knowledge graph, we propose a joint model to merge structure and entity description information based on the Translating Embeddings and Convolution Neural Networks models. In addition, to solve the segmentation errors, we use character and word embedding to learn more complex information in Tibetan. Finally, the experimental results show that our model can make a better representation of the Tibetan knowledge graph than the baseline.
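The structural half of the joint model builds on Translating Embeddings (TransE), whose scoring idea is simple enough to sketch directly: a triple (h, r, t) is plausible when the translated head h + r lies close to the tail t, and training pushes observed triples to score better than corrupted ones by a margin. The random vectors and margin below are illustrative; the paper additionally combines this with CNN-encoded entity descriptions and character/word embeddings.

import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {e: rng.normal(size=dim) for e in ["lhasa", "tibet", "china"]}
relations = {r: rng.normal(size=dim) for r in ["capital_of", "part_of"]}

def score(h, r, t):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

def margin_loss(pos, neg, margin=1.0):
    """Margin ranking loss used to train translation-based embeddings."""
    return max(0.0, margin + score(*pos) - score(*neg))

pos = ("lhasa", "capital_of", "tibet")        # observed triple
neg = ("lhasa", "capital_of", "china")        # corrupted tail
print("loss on this pair:", margin_loss(pos, neg))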
40

Piao, Yinhua, Sangseon Lee, Dohoon Lee, and Sun Kim. "Sparse Structure Learning via Graph Neural Networks for Inductive Document Classification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11165–73. http://dx.doi.org/10.1609/aaai.v36i10.21366.

Abstract
Recently, graph neural networks (GNNs) have been widely used for document classification. However, most existing methods are based on static word co-occurrence graphs without sentence-level information, which poses three challenges:(1) word ambiguity, (2) word synonymity, and (3) dynamic contextual dependency. To address these challenges, we propose a novel GNN-based sparse structure learning model for inductive document classification. Specifically, a document-level graph is initially generated by a disjoint union of sentence-level word co-occurrence graphs. Our model collects a set of trainable edges connecting disjoint words between sentences, and employs structure learning to sparsely select edges with dynamic contextual dependencies. Graphs with sparse structure can jointly exploit local and global contextual information in documents through GNNs. For inductive learning, the refined document graph is further fed into a general readout function for graph-level classification and optimization in an end-to-end manner. Extensive experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art results, and reveal the necessity to learn sparse structures for each document.
41

Monka, Sebastian, Lavdim Halilaj, and Achim Rettinger. "A survey on visual transfer learning using knowledge graphs". Semantic Web 13, no. 3 (April 6, 2022): 477–510. http://dx.doi.org/10.3233/sw-212959.

Full text
Abstract
The information perceived via visual observations of real-world phenomena is unstructured and complex. Computer vision (CV) is the field of research that attempts to make use of that information. Recent CV approaches rely on deep learning (DL) methods, which perform well when the training and testing domains follow the same underlying data distribution. However, it has been shown that minor variations in the images that occur when these methods are used in the real world can lead to unpredictable and catastrophic errors. Transfer learning is the area of machine learning that tries to prevent these errors. In particular, approaches that augment image data with auxiliary knowledge encoded in language embeddings or knowledge graphs (KGs) have achieved promising results in recent years. This survey focuses on visual transfer learning approaches using KGs, as we believe that KGs are well suited to store and represent any kind of auxiliary knowledge. KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding. To enable the reader to solve visual transfer learning problems with the help of specific KG-DL configurations, we start with a description of relevant KG modeling structures and their various expressions, such as directed labeled graphs, hypergraphs, and hyper-relational graphs. We explain the notion of a feature extractor, referring specifically to visual and semantic features. We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable for combining them with high-dimensional visual embeddings. The main section introduces four categories of how a KG can be combined with a DL pipeline: 1) Knowledge Graph as a Reviewer; 2) Knowledge Graph as a Trainee; 3) Knowledge Graph as a Trainer; and 4) Knowledge Graph as a Peer. To help researchers find meaningful evaluation benchmarks, we provide an overview of generic KGs and a set of image processing datasets and benchmarks that include various types of auxiliary knowledge. Finally, we summarize related surveys and give an outlook on challenges and open issues for future research.
42

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas, and Shivaram Venkataraman. "Demo of marius". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Full text
Abstract
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense, and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource-intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
43

Lee, Kwangyon, Haemin Jung, June Seok Hong, and Wooju Kim. "Learning Knowledge Using Frequent Subgraph Mining from Ontology Graph Data". Applied Sciences 11, no. 3 (January 20, 2021): 932. http://dx.doi.org/10.3390/app11030932.

Full text
Abstract
In many areas, vast amounts of information are rapidly accumulating in the form of ontology-based knowledge graphs, and making use of the information in these knowledge graphs is becoming increasingly important. This study proposes a novel method for efficiently learning frequent subgraphs (i.e., knowledge) from ontology-based graph data. An ontology-based large-scale graph is decomposed into small unit subgraphs, which serve as the unit for calculating subgraph frequencies. The frequent subgraphs are extracted through candidate generation and chunking processes. To verify the usefulness of the extracted frequent subgraphs, the methodology was applied to movie rating prediction. Using the frequent subgraphs as user profiles, the graph similarity between the rating graph and a new item graph was calculated to predict the rating. The MovieLens dataset was used for the experiment, and a comparison showed that the proposed method outperformed other widely used recommendation methods. This study is meaningful in that it proposes an efficient method for extracting frequent subgraphs while maintaining semantic information and considering scalability in large-scale graphs. Furthermore, the proposed method can provide results that include semantic information to serve as a logical basis for rating prediction or recommendation, which existing methods are unable to provide.
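To make the frequency-counting step concrete, here is a toy sketch of counting patterns over decomposed unit subgraphs. The triple format, the candidate set (single triples and pairs of triples), and the support threshold are simplifications chosen for illustration, not the paper's candidate generation and chunking procedure.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(unit_subgraphs, min_support=2):
    """Count candidate patterns (single triples and pairs of triples inside a unit
    subgraph); a pattern is frequent if it occurs in at least `min_support` units."""
    counts = Counter()
    for unit in unit_subgraphs:
        triples = sorted(unit)
        candidates = {(t,) for t in triples} | set(combinations(triples, 2))
        counts.update(candidates)          # each unit contributes a pattern at most once
    return {p: c for p, c in counts.items() if c >= min_support}

units = [
    {("u1", "rated", "movieA"), ("movieA", "genre", "Drama")},
    {("u2", "rated", "movieA"), ("movieA", "genre", "Drama")},
    {("u3", "rated", "movieB"), ("movieB", "genre", "Action")},
]
# Only the ("movieA", "genre", "Drama") triple repeats across units, so it is the
# single frequent pattern returned here.
print(frequent_patterns(units))
```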
44

Zhao, Jianan, Xiao Wang, Chuan Shi, Binbin Hu, Guojie Song, and Yanfang Ye. "Heterogeneous Graph Structure Learning for Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4697–705. http://dx.doi.org/10.1609/aaai.v35i5.16600.

Full text
Abstract
Heterogeneous Graph Neural Networks (HGNNs) have drawn increasing attention in recent years and achieved outstanding performance in many tasks. The success of existing HGNNs relies on one fundamental assumption, i.e., that the original heterogeneous graph structure is reliable. However, this assumption is usually unrealistic, since heterogeneous graphs in reality are inevitably noisy or incomplete. Therefore, it is vital to learn the heterogeneous graph structure for HGNNs rather than rely only on the raw graph structure. In light of this, we make the first attempt towards learning an optimal heterogeneous graph structure for HGNNs and propose a novel framework, HGSL, which jointly performs heterogeneous graph structure learning and GNN parameter learning for the classification task. Different from traditional GSL on homogeneous graphs, and considering the heterogeneity of relations in a heterogeneous graph, HGSL generates each relation subgraph independently. Specifically, in each generated relation subgraph, HGSL not only considers feature similarity by generating a feature similarity graph, but also captures the complex heterogeneous interactions in features and semantics by generating a feature propagation graph and a semantic graph. These graphs are then fused into a learned heterogeneous graph and optimized together with a GNN towards the classification objective. Extensive experiments on real-world graphs demonstrate that the proposed framework significantly outperforms the state-of-the-art methods.
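A feature similarity graph of the kind mentioned above is typically built by thresholding pairwise similarities of node features. The sketch below shows only that generic step, with the metric, threshold, and array shapes as assumptions; HGSL's relation-wise generation, feature propagation graph, and semantic graph are not reproduced.

```python
import numpy as np

def feature_similarity_graph(X, threshold=0.2):
    """Candidate adjacency from node features: cosine similarity between all
    pairs of feature vectors, kept only where it exceeds a threshold."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T
    adj = np.where(sim >= threshold, sim, 0.0)
    np.fill_diagonal(adj, 0.0)      # drop self-loops
    return adj

X = np.random.default_rng(1).normal(size=(5, 16))   # toy node feature matrix
print(feature_similarity_graph(X).round(2))
```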
45

Garcia-Hernandez, Carlos, Alberto Fernández, and Francesc Serratosa. "Learning the Edit Costs of Graph Edit Distance Applied to Ligand-Based Virtual Screening". Current Topics in Medicinal Chemistry 20, no. 18 (August 24, 2020): 1582–92. http://dx.doi.org/10.2174/1568026620666200603122000.

Full text
Abstract
Background: Graph edit distance is a methodology used to solve error-tolerant graph matching. This methodology estimates a distance between two graphs by determining the minimum number of modifications required to transform one graph into the other. These modifications, known as edit operations, have an associated edit cost that has to be determined depending on the problem. Objective: This study focuses on the use of optimization techniques to learn the edit costs used when comparing graphs by means of the graph edit distance. Methods: Graphs represent reduced structural representations of molecules using pharmacophore-type node descriptions to encode the relevant molecular properties. This reduction technique is known as extended reduced graphs. The screening and statistical tools available on the ligand-based virtual screening benchmarking platform and the RDKit were used. Results: In the experiments, the graph edit distance using learned costs performed better than, or as well as, the distance using predefined costs. This is exemplified with six publicly available datasets: DUD-E, MUV, GLL&GDD, CAPST, NRLiSt BDB, and ULS-UDS. Conclusion: This study shows that the graph edit distance along with learned edit costs is useful for identifying bioactivity similarities in a structurally diverse group of molecules. Furthermore, the target-specific edit costs might provide useful structure-activity information for future drug-design efforts.
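How parameterized edit costs enter a graph edit distance computation can be shown with networkx's graph_edit_distance, which accepts user-supplied cost functions. The pharmacophore labels, the cost values, and treating the costs as fixed numbers (rather than learning them by optimization as in the paper) are assumptions of this sketch.

```python
import networkx as nx

# Two toy "reduced graphs" whose nodes carry pharmacophore-type labels.
G1 = nx.Graph()
G1.add_node(0, ptype="aromatic")
G1.add_node(1, ptype="donor")
G1.add_edge(0, 1)
G2 = nx.Graph()
G2.add_node(0, ptype="aromatic")
G2.add_node(1, ptype="acceptor")
G2.add_edge(0, 1)

# Edit costs as free parameters; here they are simply fixed illustrative numbers.
costs = {"subst_same": 0.0, "subst_diff": 1.5, "node_indel": 1.0}

def node_subst_cost(a, b):
    return costs["subst_same"] if a["ptype"] == b["ptype"] else costs["subst_diff"]

d = nx.graph_edit_distance(
    G1, G2,
    node_subst_cost=node_subst_cost,
    node_del_cost=lambda a: costs["node_indel"],
    node_ins_cost=lambda a: costs["node_indel"],
)
print(d)  # 1.5 under these costs: one pharmacophore substitution
```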
46

Zhang, Kainan, Zhipeng Cai, and Daehee Seo. "Privacy-Preserving Federated Graph Neural Network Learning on Non-IID Graph Data". Wireless Communications and Mobile Computing 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/8545101.

Full text
Abstract
Since the concept of federated learning (FL) was proposed by Google in 2017, many applications have adopted FL technology due to its outstanding performance in data integration, computing performance, privacy protection, etc. However, most traditional federated learning-based applications focus on image processing and natural language processing, with few achievements for graph neural networks due to the non-independent and identically distributed (non-IID) nature of graph data. Representation learning on graph-structured data generates graph embeddings, which help machines understand graphs effectively. Meanwhile, privacy protection plays a more meaningful role in analyzing graph-structured data such as social networks. Hence, this paper proposes PPFL-GNN, a novel privacy-preserving federated graph neural network framework for node representation learning, which is a pioneering work for graph neural network-based federated learning. In PPFL-GNN, clients generate graph embeddings from their local graph datasets and integrate information from other collaborating clients through federated learning to produce more accurate representation results. More importantly, by integrating embedding alignment techniques in PPFL-GNN, we overcome the obstacles of federated learning on non-IID graph data and can further reduce privacy exposure by sharing preferred information.
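For orientation only: the server-side aggregation most federated schemes build on is a weighted parameter average (FedAvg). The sketch below shows that generic step; PPFL-GNN's embedding alignment and privacy mechanisms are not reproduced, and all names and shapes are placeholders.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Generic FedAvg step: weighted average of each client's parameter arrays,
    weighted by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return {
        name: sum(w * params[name] for w, params in zip(weights, client_params))
        for name in client_params[0]
    }

clients = [
    {"W": np.ones((2, 2)) * 1.0},   # parameters from client 1
    {"W": np.ones((2, 2)) * 3.0},   # parameters from client 2
]
print(federated_average(clients, client_sizes=[100, 300])["W"])  # 2.5 everywhere
```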
47

Vaghani, Dev. "An Approch for Representation of Node Using Graph Transformer Networks". International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 27–37. http://dx.doi.org/10.22214/ijraset.2023.48485.

Full text
Abstract
In representation learning on graphs, graph neural networks (GNNs) have been widely employed and have attained state-of-the-art performance in tasks such as node classification and link prediction. However, the majority of GNNs in use today are designed to learn node representations on homogeneous and fixed graphs. These limitations are particularly significant when learning representations on a misspecified graph or on a heterogeneous graph, i.e., one made up of different kinds of nodes and edges. This study proposes Graph Transformer Networks (GTNs), which can generate new graph structures by finding useful connections between unconnected nodes in the original graph while learning effective node representations on the new graphs end-to-end. The basic layer of GTNs, the Graph Transformer layer, learns a soft selection of edge types and composite relations to produce meaningful multi-hop connections known as meta-paths. This research demonstrates that GTNs can learn new graph structures from data and tasks without any prior domain expertise, and that they can then apply convolution on the new graphs to produce effective node representations. GTNs outperformed state-of-the-art approaches that require predefined meta-paths from domain knowledge on all three benchmark node classification tasks, without the use of domain-specific graph pre-processing.
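The soft selection of edge types can be pictured as a softmax-weighted combination of per-relation adjacency matrices whose products yield multi-hop meta-path adjacencies. The sketch below uses random matrices and plain numpy in place of the learned selection weights, so all sizes and values are illustrative rather than the GTN implementation.

```python
import numpy as np

def soft_adjacency(adjs, logits):
    """Soft selection of edge types: a softmax-weighted combination of the
    per-relation adjacency matrices."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * A for wi, A in zip(w, adjs))

rng = np.random.default_rng(0)
n, n_edge_types = 6, 3
adjs = [(rng.random((n, n)) < 0.3).astype(float) for _ in range(n_edge_types)]

# Two stacked soft selections; their matrix product gives a 2-hop (length-2
# meta-path) adjacency whose relation composition is learned rather than predefined.
A1 = soft_adjacency(adjs, logits=rng.normal(size=n_edge_types))
A2 = soft_adjacency(adjs, logits=rng.normal(size=n_edge_types))
meta_path_adj = A1 @ A2
print(meta_path_adj.shape)
```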
48

Ruf, Verena, Anna Horrer, Markus Berndt, Sarah Isabelle Hofer, Frank Fischer, Martin R. Fischer, Jan M. Zottmann, Jochen Kuhn, and Stefan Küchemann. "A Literature Review Comparing Experts’ and Non-Experts’ Visual Processing of Graphs during Problem-Solving and Learning". Education Sciences 13, no. 2 (February 19, 2023): 216. http://dx.doi.org/10.3390/educsci13020216.

Full text
Abstract
The interpretation of graphs plays a pivotal role in education because it is relevant for understanding and representing data and for comprehending concepts in various domains. Accordingly, many studies examine students’ gaze behavior by comparing different levels of expertise when interpreting graphs. This literature review presents an overview of 32 articles, published up to January 2022, comparing the gaze behavior of experts and non-experts during problem-solving and learning with graphs. Most studies analyzed students’ dwell time, fixation duration, and fixation count on macro- and meso-level, as well as on micro-level, areas of interest. Experts seemed to pay more attention to relevant parts of a graph and less to irrelevant parts, in line with the information-reduction hypothesis. Experts also made more integrative eye movements within a graph in terms of dynamic metrics. However, how expertise is determined across studies is inconsistent. Therefore, we recommend four factors that will help to better determine expertise. This review gives an overview of evaluation strategies for different types of graphs and across various domains, which could facilitate instructing students in evaluating graphs.
49

Peng, Yun, Byron Choi, and Jianliang Xu. "Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art". Data Science and Engineering 6, no. 2 (April 28, 2021): 119–41. http://dx.doi.org/10.1007/s41019-021-00155-3.

Full text
Abstract
Graphs have been widely used to represent complex data in many applications, such as e-commerce, social networks, and bioinformatics. Efficient and effective analysis of graph data is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have therefore focused on the potential of using machine learning (ML) to solve graph-based CO problems. Most recent methods follow a two-stage framework. The first stage is graph representation learning, which embeds the graphs into low-dimensional vectors. The second stage uses machine learning to solve the CO problems using the embeddings of the graphs learned in the first stage. The works for the first stage can be classified into two categories: graph embedding methods and end-to-end learning methods. For graph embedding methods, the learning of the embeddings of the graphs has its own objective, which may not rely on the CO problems to be solved; the CO problems are then solved by independent downstream tasks. For end-to-end learning methods, the learning of the embeddings of the graphs does not have its own objective and is an intermediate step of the learning procedure for solving the CO problems. The works for the second stage can also be classified into two categories: non-autoregressive methods and autoregressive methods. Non-autoregressive methods predict a solution for a CO problem in one shot: the model predicts a matrix that denotes the probability of each node/edge being part of a solution, and the solution can be computed from the matrix using search heuristics such as beam search. Autoregressive methods iteratively extend a partial solution step by step; at each step, the model predicts a node/edge conditioned on the current partial solution, which is then used to extend it. In this survey, we provide a thorough overview of recent studies of graph learning-based CO methods. The survey ends with several remarks on future research directions.
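To illustrate the non-autoregressive decoding step described above, the sketch below greedily decodes a tour from an edge-probability matrix (random numbers standing in for a trained model's output). Real systems would use beam search or problem-specific heuristics, and the routing setting is just one example of a CO problem.

```python
import numpy as np

def greedy_decode_tour(edge_probs):
    """Greedy decoding of a tour from an edge-probability matrix: start at node 0
    and repeatedly move to the most probable unvisited neighbor. Beam search would
    instead keep the k best partial tours at each step."""
    n = edge_probs.shape[0]
    tour, visited = [0], {0}
    while len(tour) < n:
        probs = edge_probs[tour[-1]].copy()
        probs[list(visited)] = -np.inf       # never revisit a node
        nxt = int(np.argmax(probs))
        tour.append(nxt)
        visited.add(nxt)
    return tour

P = np.random.default_rng(0).random((5, 5))   # stand-in for predicted edge probabilities
print(greedy_decode_tour(P))
```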
50

Reba, Kristjan, Matej Guid, Kati Rozman, Dušanka Janežič, and Janez Konc. "Exact Maximum Clique Algorithm for Different Graph Types Using Machine Learning". Mathematics 10, no. 1 (December 28, 2021): 97. http://dx.doi.org/10.3390/math10010097.

Full text
Abstract
Finding a maximum clique is important in research areas such as computational chemistry, social network analysis, and bioinformatics. It is possible to compare the maximum clique size between protein graphs to determine their similarity and function. In this paper, improvements based on machine learning (ML) are added to a dynamic algorithm for finding the maximum clique in a protein graph, Maximum Clique Dynamic (MaxCliqueDyn; short: MCQD). This algorithm was published in 2007 and has been widely used in bioinformatics since then. It uses an empirically determined parameter, Tlimit, that controls the algorithm's flow. We have extended the MCQD algorithm with an initial phase in which a machine learning model predicts the Tlimit parameter best suited to each input graph. Such adaptability to graph types based on state-of-the-art machine learning is a novel approach that most graph-theoretic algorithms have not used. We show empirically that the resulting new algorithm, MCQD-ML, improves search speed on certain types of graphs, in particular molecular docking graphs used in drug design, where they determine energetically favorable conformations of small molecules in a protein binding site. In such cases, the speed-up is twofold.
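A per-graph parameter predictor of this kind can be sketched as a regression from cheap structural features to the parameter value. The features, the random-forest model, and the placeholder Tlimit labels below are assumptions made for illustration, not the setup used for MCQD-ML.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def graph_features(adj):
    """Cheap structural features (size, density, degree statistics) of the kind a
    Tlimit predictor could be trained on."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    density = adj.sum() / (n * (n - 1)) if n > 1 else 0.0
    return [n, density, deg.mean(), deg.std(), deg.max()]

rng = np.random.default_rng(0)
raw = [(rng.random((20, 20)) < p).astype(float) for p in rng.uniform(0.1, 0.9, 50)]
graphs = [np.triu(g, 1) + np.triu(g, 1).T for g in raw]   # symmetric, no self-loops
X = np.array([graph_features(g) for g in graphs])
y = rng.uniform(0.0, 0.1, size=len(graphs))               # placeholder "best Tlimit" labels

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict(X[:3]))                               # predicted Tlimit values
```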
