Academic literature on the topic 'Structural Graph Representations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Structural Graph Representations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Structural Graph Representations"

1

Zhou, Xiaojie, Pengjun Zhai, and Yu Fang. "Learning Description-Based Representations for Temporal Knowledge Graph Reasoning via Attentive CNN." Journal of Physics: Conference Series 2025, no. 1 (September 1, 2021): 012003. http://dx.doi.org/10.1088/1742-6596/2025/1/012003.

Full text
Abstract:
Knowledge graphs have played a significant role in various applications, and knowledge reasoning is one of the key tasks. However, the task gets more challenging when each fact is associated with a time annotation on a temporal knowledge graph. Most of the existing temporal knowledge graph representation learning methods exploit structural information to learn the entity and relation representations. With these methods, entities with similar structural information cannot be easily distinguished. Incorporating other information is an effective way to solve such problems. To address this problem, we propose a temporal knowledge graph representation learning method, d-HyTE, that incorporates entity descriptions. We learn structure-based representations of entities and relations and explore a deep convolutional neural network with attention to encode description-based representations of entities. The joint representation of the two different representations of an entity is regarded as its final representation. We evaluate this method on link prediction and temporal scope prediction. Experimental results show that our method d-HyTE outperforms the other baselines on many metrics.
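As a rough illustration of the joint representation described in this abstract, the following sketch (not d-HyTE itself; the attention query, dimensions, and the simple averaging step are illustrative assumptions) builds a description-based entity vector as an attention-weighted sum of word vectors and combines it with a structure-based vector:

import numpy as np

# Illustrative sketch, not the d-HyTE model: the description-based entity
# vector is an attention-weighted sum of the word vectors of its textual
# description; the joint representation is a simple average with the
# structure-based vector.
def description_representation(word_vecs, query):
    scores = word_vecs @ query                    # relevance of each word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax attention weights
    return weights @ word_vecs                    # weighted sum of word vectors

rng = np.random.default_rng(0)
dim = 8
word_vecs = rng.normal(size=(12, dim))            # embedded description words
e_struct = rng.normal(size=dim)                   # structure-based embedding
e_desc = description_representation(word_vecs, query=e_struct)
e_joint = 0.5 * (e_struct + e_desc)               # joint entity representation
print(e_joint.shape)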
APA, Harvard, Vancouver, ISO, and other styles
2

Malaviya, Chaitanya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. "Commonsense Knowledge Base Completion with Structural and Semantic Context." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2925–33. http://dx.doi.org/10.1609/aaai.v34i03.5684.

Full text
Abstract:
Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (∼18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures — a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that can address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method to incorporate information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis on model predictions shines light on the types of commonsense knowledge that language models capture well.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Yifei, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, and Pengyu Hong. "Motif-Based Graph Representation Learning with Application to Chemical Molecules." Informatics 10, no. 1 (January 11, 2023): 8. http://dx.doi.org/10.3390/informatics10010008.

Full text
Abstract:
This work considers the task of representation learning on the attributed relational graph (ARG). Both the nodes and edges in an ARG are associated with attributes/features allowing ARGs to encode rich structural information widely observed in real applications. Existing graph neural networks offer limited ability to capture complex interactions within local structural contexts, which hinders them from taking advantage of the expression power of ARGs. We propose motif convolution module (MCM), a new motif-based graph representation learning technique to better utilize local structural information. The ability to handle continuous edge and node features is one of MCM’s advantages over existing motif-based models. MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher level node representations via multilayer perceptron and/or message passing in graph neural networks. When compared with other graph learning approaches to classifying synthetic graphs, our approach is substantially better at capturing structural context. We also demonstrate the performance and explainability advantages of our approach by applying it to several molecular benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
4

Joaristi, Mikel, and Edoardo Serra. "SIR-GN: A Fast Structural Iterative Representation Learning Approach For Graph Nodes." ACM Transactions on Knowledge Discovery from Data 15, no. 6 (May 19, 2021): 1–39. http://dx.doi.org/10.1145/3450315.

Full text
Abstract:
Graph representation learning methods have attracted an increasing amount of attention in recent years. These methods focus on learning a numerical representation of the nodes in a graph. Learning these representations is a powerful instrument for tasks such as graph mining, visualization, and hashing. They are of particular interest because they facilitate the direct use of standard machine learning models on graphs. Graph representation learning methods can be divided into two main categories: methods preserving the connectivity information of the nodes and methods preserving nodes' structural information. Connectivity-based methods focus on encoding relationships between nodes, with connected nodes being closer together in the resulting latent space. Structure-preserving methods, in contrast, generate a latent space where nodes serving a similar structural function in the network are encoded close to each other, independently of whether they are connected or even close to each other in the graph. While there are many works that focus on preserving node connectivity, only a few focus on preserving nodes' structure. Properly encoding nodes' structural information is fundamental for many real-world applications, as it has been demonstrated that this information can be leveraged to successfully solve many tasks where connectivity-based methods usually fail. A typical example is the task of node classification, i.e., the assignment or prediction of a particular label for a node. Current limitations of structural representation methods are their scalability, the meaning of their representations, and the lack of a formal proof guaranteeing the preservation of structural properties. We propose a new graph representation learning method, called the Structural Iterative Representation learning approach for Graph Nodes (SIR-GN). In this work, we propose two variations (SIR-GN: GMM and SIR-GN: K-Means) and show how our best variation, SIR-GN: K-Means: (1) theoretically guarantees the preservation of graph structural similarities, (2) provides a clear meaning about its representation and a way to interpret it with a specifically designed attribution procedure, and (3) is scalable and fast to compute. In addition, our experiments show that SIR-GN: K-Means is often better than, or in the worst case comparable to, the existing structural graph representation learning methods in the literature. Also, we empirically show its superior scalability and computational performance when compared to other existing approaches.
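The iterative idea sketched in this abstract can be illustrated roughly as follows (a minimal sketch assuming networkx and scikit-learn; the degree-based initialization, number of clusters, and iteration count are placeholder choices, not the authors' implementation):

import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Rough sketch in the spirit of SIR-GN: K-Means: every node starts from its
# degree and is repeatedly re-described by how its neighbors distribute over
# K-Means clusters, so nodes with similar structural roles end up with
# similar descriptors regardless of where they sit in the graph.
def structural_descriptors(G, n_clusters=4, n_iter=3, seed=0):
    nodes = list(G.nodes())
    X = np.array([[G.degree(v)] for v in nodes], dtype=float)
    for _ in range(n_iter):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
        member = dict(zip(nodes, km.labels_))
        X_new = np.zeros((len(nodes), n_clusters))
        for i, v in enumerate(nodes):
            for u in G.neighbors(v):
                X_new[i, member[u]] += 1.0        # histogram of neighbor clusters
        X = X_new
    return dict(zip(nodes, X))

G = nx.karate_club_graph()
print(structural_descriptors(G)[0])               # descriptor of node 0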
APA, Harvard, Vancouver, ISO, and other styles
5

Lyu, Gengyu, Xiang Deng, Yanan Wu, and Songhe Feng. "Beyond Shared Subspace: A View-Specific Fusion for Multi-View Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7647–54. http://dx.doi.org/10.1609/aaai.v36i7.20731.

Full text
Abstract:
In multi-view multi-label learning (MVML), each instance is described by several heterogeneous feature representations and associated with multiple valid labels simultaneously. Although diverse MVML methods have been proposed over the last decade, most previous studies focus on leveraging the shared subspace across different views to represent the multi-view consensus information, while it is still an open issue whether such shared subspace representation is necessary when formulating the desired MVML model. In this paper, we propose a DeepGCN based View-Specific MVML method (D-VSM) which can bypass seeking for the shared subspace representation, and instead directly encoding the feature representation of each individual view through the deep GCN to couple with the information derived from the other views. Specifically, we first construct all instances under different feature representations into the corresponding feature graphs respectively, and then integrate them into a unified graph by integrating the different feature representations of each instance. Afterwards, the graph attention mechanism is adopted to aggregate and update all nodes on the unified graph to form structural representation for each instance, where both intra-view correlations and inter-view alignments have been jointly encoded to discover the underlying semantic relations. Finally, we derive a label confidence score for each instance by averaging the label confidence of its different feature representations with the multi-label soft margin loss. Extensive experiments have demonstrated that our proposed method significantly outperforms state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Wang, Siwei Wang, Xifeng Guo, Zhenyu Zhou, and En Zhu. "Auxiliary Graph for Attribute Graph Clustering." Entropy 24, no. 10 (October 2, 2022): 1409. http://dx.doi.org/10.3390/e24101409.

Full text
Abstract:
Attribute graph clustering algorithms that include topological structural information into node characteristics for building robust representations have proven to have promising efficacy in a variety of applications. However, the presented topological structure emphasizes local links between linked nodes but fails to convey relationships between nodes that are not directly linked, limiting the potential for future clustering performance improvement. To solve this issue, we offer the Auxiliary Graph for Attribute Graph Clustering technique (AGAGC). Specifically, we construct an additional graph as a supervisor based on the node attribute. The additional graph can serve as an auxiliary supervisor that aids the present one. To generate a trustworthy auxiliary graph, we offer a noise-filtering approach. Under the supervision of both the pre-defined graph and an auxiliary graph, a more effective clustering model is trained. Additionally, the embeddings of multiple layers are merged to improve the discriminative power of representations. We offer a clustering module for a self-supervisor to make the learned representation more clustering-aware. Finally, our model is trained using a triplet loss. Experiments are done on four available benchmark datasets, and the findings demonstrate that the proposed model outperforms or is comparable to state-of-the-art graph clustering models.
APA, Harvard, Vancouver, ISO, and other styles
7

Lv, Shangwen, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. "Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8449–56. http://dx.doi.org/10.1609/aaai.v34i05.6364.

Full text
Abstract:
Commonsense question answering aims to answer questions which require background knowledge that is not explicitly expressed in the question. The key challenge is how to obtain evidence from external knowledge and make predictions based on the evidence. Recent studies either learn to generate evidence from human-annotated evidence, which is expensive to collect, or extract evidence from either structured or unstructured knowledge bases, which fails to take advantage of both sources simultaneously. In this work, we propose to automatically extract evidence from heterogeneous knowledge sources, and answer questions based on the extracted evidence. Specifically, we extract evidence from both a structured knowledge base (i.e., ConceptNet) and Wikipedia plain texts. We construct graphs for both sources to obtain the relational structures of evidence. Based on these graphs, we propose a graph-based approach consisting of a graph-based contextual word representation learning module and a graph-based inference module. The first module utilizes graph structural information to re-define the distance between words for learning better contextual word representations. The second module adopts a graph convolutional network to encode neighbor information into the representations of nodes, and aggregates evidence with a graph attention mechanism for predicting the final answer. Experimental results on the CommonsenseQA dataset illustrate that our graph-based approach over both knowledge sources brings improvement over strong baselines. Our approach achieves the state-of-the-art accuracy (75.3%) on the CommonsenseQA dataset.
APA, Harvard, Vancouver, ISO, and other styles
8

Ta'aseh, Nevo, and Offer Shai. "Network Graph Theory Perspective on Skeletal Structures for Theoretical and Educational Purposes." International Journal of Mechanical Engineering Education 36, no. 4 (October 2008): 294–319. http://dx.doi.org/10.7227/ijmee.36.4.3.

Full text
Abstract:
The paper introduces an approach to the analysis of skeletal structures in which they are represented by a discrete mathematical model called graph representation. The paper shows that the reasoning upon the structure can be performed solely upon the representation, which, besides the theoretical value, presents a powerful educational tool. Students can learn skeletal structures entirely through the graph representations and derive advanced structural topics, including the conjugate theorem and the unit force method from the theorems and principles of network graph theory. The graph representations used in the paper for structures have also been applied to represent systems from different engineering disciplines. This provides students with a multidisciplinary perspective on analysis of engineering systems in general, and skeletal structures in particular.
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Yu, Liang Hu, Yang Wu, and Wanfu Gao. "Graph Multihead Attention Pooling with Self-Supervised Learning." Entropy 24, no. 12 (November 29, 2022): 1745. http://dx.doi.org/10.3390/e24121745.

Full text
Abstract:
Graph neural networks (GNNs), which work with graph-structured data, have attracted considerable attention and achieved promising performance on graph-related tasks. While the majority of existing GNN methods focus on the convolutional operation for encoding the node representations, the graph pooling operation, which maps the set of nodes into a coarsened graph, is crucial for graph-level tasks. We argue that a well-defined graph pooling operation should avoid the information loss of the local node features and global graph structure. In this paper, we propose a hierarchical graph pooling method based on the multihead attention mechanism, namely GMAPS, which compresses both node features and graph structure into the coarsened graph. Specifically, a multihead attention mechanism is adopted to arrange nodes into a coarsened graph based on their features and structural dependencies between nodes. In addition, to enhance the expressiveness of the cluster representations, a self-supervised mechanism is introduced to maximize the mutual information between the cluster representations and the global representation of the hierarchical graph. Our experimental results show that the proposed GMAPS obtains significant and consistent performance improvements compared with state-of-the-art baselines on six benchmarks from the biological and social domains of graph classification and reconstruction tasks.
APA, Harvard, Vancouver, ISO, and other styles
10

Yoon, Jisung, Kai-Cheng Yang, Woo-Sung Jung, and Yong-Yeol Ahn. "Persona2vec: a flexible multi-role representations learning framework for graphs." PeerJ Computer Science 7 (March 30, 2021): e439. http://dx.doi.org/10.7717/peerj-cs.439.

Full text
Abstract:
Graph embedding techniques, which learn low-dimensional representations of a graph, are achieving state-of-the-art performance in many graph mining tasks. Most existing embedding algorithms assign a single vector to each node, implicitly assuming that a single representation is enough to capture all characteristics of the node. However, across many domains, it is common to observe pervasively overlapping community structure, where most nodes belong to multiple communities, playing different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that efficiently learns multiple representations of nodes based on their structural contexts. Using link prediction-based evaluation, we show that our framework is significantly faster than the existing state-of-the-art model while achieving better performance.
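The persona idea can be illustrated with a small sketch (an assumption-laden simplification, not the persona2vec code: here each node is split according to the connected components of its open neighborhood, and the subsequent random-walk embedding of the persona graph is left out):

import networkx as nx

# Each node becomes one persona per connected component of its neighborhood,
# so a node bridging several communities can later receive several embeddings.
def persona_graph(G):
    persona_for_edge = {}          # (node, neighbor) -> persona id of `node`
    membership = {}                # original node -> list of its persona ids
    next_id = 0
    for v in G.nodes():
        neigh = list(G.neighbors(v))
        comps = list(nx.connected_components(G.subgraph(neigh))) or [set()]
        membership[v] = []
        for comp in comps:
            pid, next_id = next_id, next_id + 1
            membership[v].append(pid)
            for w in comp:
                persona_for_edge[(v, w)] = pid
    P = nx.Graph()
    P.add_nodes_from(range(next_id))
    for u, v in G.edges():        # rewire original edges between matching personas
        P.add_edge(persona_for_edge[(u, v)], persona_for_edge[(v, u)])
    return P, membership

G = nx.karate_club_graph()
P, members = persona_graph(G)
print(G.number_of_nodes(), "nodes ->", P.number_of_nodes(), "personas")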
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Structural Graph Representations"

1

Gibert, Domingo Jaume. "Vector Space Embedding of Graphs via Statistics of Labelling Information." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/96240.

Full text
Abstract:
Pattern recognition is the task that aims at distinguishing objects among different classes. When such a task is to be solved in an automatic way, a crucial step is how to formally represent such patterns to the computer. Based on the different representational formalisms, we may distinguish between statistical and structural pattern recognition. The former describes objects as a set of measurements arranged in the form of what is called a feature vector. The latter assumes that relations between parts of the underlying objects need to be explicitly represented and thus it uses relational structures such as graphs for encoding their inherent information. Vector spaces are a very flexible mathematical structure that has allowed several efficient ways for the analysis of patterns under the form of feature vectors. Nevertheless, such a representation cannot explicitly cope with binary relations between parts of the objects and it is restricted to measuring the exact same number of features for each pattern under study regardless of their complexity. Graph-based representations present the contrary situation. They can easily adapt to the inherent complexity of the patterns but introduce a problem of high computational complexity, hindering the design of efficient tools to process and analyze patterns. Solving this paradox is the main goal of this thesis. The ideal situation for solving pattern recognition problems would be to represent the patterns using relational structures such as graphs, and to be able to use the wealthy repository of data processing tools from the statistical pattern recognition domain. An elegant solution to this problem is to transform the graph domain into a vector domain where any processing algorithm can be applied. In other words, by mapping each graph to a point in a vector space we automatically get access to the rich set of algorithms from the statistical domain to be applied in the graph domain. Such a methodology is called graph embedding. In this thesis we propose to associate feature vectors to graphs in a simple and very efficient way by just paying attention to the labelling information that graphs store. In particular, we count frequencies of node labels and of edges between labels. Despite their locality, these features are able to robustly represent structurally global properties of graphs when considered together in the form of a vector. We initially deal with the case of discrete attributed graphs, where features are easy to compute. The continuous case is tackled as a natural generalization of the discrete one, where rather than counting node and edge labelling instances, we count statistics of some representatives of them. The proposed vectorial representations of graphs suffer from high dimensionality and correlation among components, and we face these problems with feature selection algorithms. We also explore how the diversity of different embedding representations can be exploited in order to boost the performance of base classifiers in a multiple classifier systems framework. An extensive experimental evaluation finally shows how the methodology we propose can be efficiently computed and can compete with other graph matching and embedding methodologies.
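The counting scheme described in this abstract is simple enough to sketch directly (assuming networkx graphs with a discrete "label" node attribute and a known label alphabet; this is an illustration of the idea, not the thesis code):

import itertools
import networkx as nx
import numpy as np

# Each graph becomes a vector of node-label frequencies followed by the
# frequencies of edges between every unordered pair of labels.
def label_statistics_vector(G, labels):
    node_idx = {l: i for i, l in enumerate(labels)}
    pairs = list(itertools.combinations_with_replacement(labels, 2))
    pair_idx = {p: i for i, p in enumerate(pairs)}
    vec = np.zeros(len(labels) + len(pairs))
    for _, data in G.nodes(data=True):
        vec[node_idx[data["label"]]] += 1
    for u, v in G.edges():
        a, b = sorted((G.nodes[u]["label"], G.nodes[v]["label"]))
        vec[len(labels) + pair_idx[(a, b)]] += 1
    return vec

G = nx.Graph()
G.add_nodes_from([(0, {"label": "C"}), (1, {"label": "O"}), (2, {"label": "C"})])
G.add_edges_from([(0, 1), (1, 2)])
print(label_statistics_vector(G, labels=["C", "O"]))   # [2. 1. 0. 2. 0.]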
APA, Harvard, Vancouver, ISO, and other styles
2

Sadeghi, Kayvan. "Graphical representation of independence structures." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:86ff6155-a6b9-48f9-9dac-1ab791748072.

Full text
Abstract:
In this thesis we describe subclasses of a class of graphs with three types of edges, called loopless mixed graphs (LMGs). The class of LMGs contains almost all known classes of graphs used in the literature of graphical Markov models. We focus in particular on the subclass of ribbonless graphs (RGs), which as special cases include undirected graphs, bidirected graphs, and directed acyclic graphs, as well as ancestral graphs and summary graphs. We define a unifying interpretation of independence structure for LMGs and pairwise and global Markov properties for RGs, discuss their maximality, and, in particular, prove the equivalence of pairwise and global Markov properties for graphoids defined over the nodes of RGs. Three subclasses of LMGs (MC, summary, and ancestral graphs) capture the modified independence model after marginalisation over unobserved variables and conditioning on selection variables of variables satisfying independence restrictions represented by a directed acyclic graph (DAG). We derive algorithms to generate these graphs from a given DAG or from a graph of a specific subclass, and we study the relationships between these classes of graphs. Finally, a manual and codes are provided that explain methods and functions in R for implementing and generating various graphs studied in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
3

Tsitsulin, Anton. "Similarities and Representations of Graph Structures." Bonn: Universitäts- und Landesbibliothek Bonn, 2021. http://d-nb.info/1238687229/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gurung, Topraj. "Compact connectivity representation for triangle meshes." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47709.

Full text
Abstract:
Many digital models used in entertainment, medical visualization, material science, architecture, Geographic Information Systems (GIS), and mechanical Computer Aided Design (CAD) are defined in terms of their boundaries. These boundaries are often approximated using triangle meshes. The complexity of models, which can be measured by triangle count, increases rapidly with the precision of scanning technologies and with the need for higher resolution. An increase in mesh complexity results in an increase of storage requirement, which in turn increases the frequency of disk access or cache misses during mesh processing, and hence decreases performance. For example, in a test application involving a mesh with 55 million triangles in a machine with 4GB of memory versus a machine with 1GB of memory, performance decreases by a factor of about 6000 because of memory thrashing. To help reduce memory thrashing, we focus on decreasing the average storage requirement per triangle measured in 32-bit integer references per triangle (rpt). This thesis covers compact connectivity representation for triangle meshes and discusses four data structures: 1. Sorted Opposite Table (SOT), which uses 3 rpt and has been extended to support tetrahedral meshes. 2. Sorted Quad (SQuad), which uses about 2 rpt and has been extended to support streaming. 3. Laced Ring (LR), which uses about 1 rpt and offers an excellent compromise between storage compactness and performance of mesh traversal operators. 4. Zipper, an extension of LR, which uses about 6 bits per triangle (equivalently 0.19 rpt), therefore is the most compact representation. The triangle mesh data structures proposed in this thesis support the standard set of mesh connectivity operators introduced by the previously proposed Corner Table at an amortized constant time complexity. They can be constructed in linear time and space from the Corner Table or any equivalent representation. If geometry is stored as 16-bit coordinates, using Zipper instead of the Corner Table increases the size of the mesh that can be stored in core memory by a factor of about 8.
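The Corner Table that these structures build on and are measured against can be sketched in a few lines (a simplified illustration assuming a manifold triangle mesh given as vertex-index triples; the compressed SOT, SQuad, LR, and Zipper variants are not reproduced here):

# Corner c belongs to triangle c // 3; V maps corners to vertices and O maps
# corners to their opposite corners across the shared edge (-1 on the border).
def build_corner_table(triangles):
    V = [v for tri in triangles for v in tri]        # 3 vertex refs per triangle
    n = lambda c: 3 * (c // 3) + (c + 1) % 3          # next corner in its triangle
    p = lambda c: 3 * (c // 3) + (c + 2) % 3          # previous corner
    O = [-1] * len(V)
    edge_to_corner = {}
    for c in range(len(V)):
        key = tuple(sorted((V[n(c)], V[p(c)])))       # edge opposite corner c
        if key in edge_to_corner:
            d = edge_to_corner.pop(key)
            O[c], O[d] = d, c
        else:
            edge_to_corner[key] = c
    return V, O

V, O = build_corner_table([(0, 1, 2), (2, 1, 3)])    # two triangles sharing edge (1, 2)
print(V)   # [0, 1, 2, 2, 1, 3]
print(O)   # [5, -1, -1, -1, -1, 0]: corner 0 (vertex 0) faces corner 5 (vertex 3)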
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, John Boaz T. "Deep Learning on Graph-structured Data." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/570.

Full text
Abstract:
In recent years, deep learning has made a significant impact in various fields – helping to push the state-of-the-art forward in many application domains. Convolutional Neural Networks (CNN) have been applied successfully to tasks such as visual object detection, image super-resolution, and video action recognition while Long Short-term Memory (LSTM) and Transformer networks have been used to solve a variety of challenging tasks in natural language processing. However, these popular deep learning architectures (i.e., CNNs, LSTMs, and Transformers) can only handle data that can be represented as grids or sequences. Due to this limitation, many existing deep learning approaches do not generalize to problem domains where the data is represented as graphs – social networks in social network analysis or molecular graphs in chemoinformatics, for instance. The goal of this thesis is to help bridge the gap by studying deep learning solutions that can handle graph data naturally. In particular, we explore deep learning-based approaches in the following areas. 1. Graph Attention. In the real-world, graphs can be both large – with many complex patterns – and noisy which can pose a problem for effective graph mining. An effective way to deal with this issue is to use an attention-based deep learning model. An attention mechanism allows the model to focus on task-relevant parts of the graph which helps the model make better decisions. We introduce a model for graph classification which uses an attention-guided walk to bias exploration towards more task-relevant parts of the graph. For the task of node classification, we study a different model – one with an attention mechanism which allows each node to select the most task-relevant neighborhood to integrate information from. 2. Graph Representation Learning. Graph representation learning seeks to learn a mapping that embeds nodes, and even entire graphs, as points in a low-dimensional continuous space. The function is optimized such that the geometric distance between objects in the embedding space reflect some sort of similarity based on the structure of the original graph(s). We study the problem of learning time-respecting embeddings for nodes in a dynamic network. 3. Brain Network Discovery. One of the fundamental tasks in functional brain analysis is the task of brain network discovery. The brain is a complex structure which is made up of various brain regions, many of which interact with each other. The objective of brain network discovery is two-fold. First, we wish to partition voxels – from a functional Magnetic Resonance Imaging scan – into functionally and spatially cohesive regions (i.e., nodes). Second, we want to identify the relationships (i.e., edges) between the discovered regions. We introduce a deep learning model which learns to construct a group-cohesive partition of voxels from the scans of multiple individuals in the same group. We then introduce a second model which can recover a hierarchical set of brain regions, allowing us to examine the functional organization of the brain at different levels of granularity. Finally, we propose a model for the problem of unified and group-contrasting edge discovery which aims to discover discriminative brain networks that can help us to better distinguish between samples from different classes.
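The neighborhood-attention mechanism mentioned under the first point can be illustrated generically (a sketch of the general idea in the style of graph attention networks, not the thesis models; the weights here are random placeholders rather than trained parameters):

import numpy as np

# Each node scores its neighbors (plus itself), normalizes the scores with a
# softmax, and aggregates the projected neighbor features with those weights.
def attention_aggregate(X, adj, W, a, leaky=0.2):
    H = X @ W
    out = np.zeros_like(H)
    for i in range(len(adj)):
        neigh = adj[i] + [i]
        scores = np.array([np.dot(a, np.concatenate([H[i], H[j]])) for j in neigh])
        scores = np.where(scores > 0, scores, leaky * scores)   # LeakyReLU
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = sum(w * H[j] for w, j in zip(weights, neigh))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))                  # 4 nodes, 5 input features
adj = [[1, 2], [0], [0, 3], [2]]             # adjacency lists
W = rng.normal(size=(5, 8))                  # projection (trained in practice)
a = rng.normal(size=16)                      # attention vector for [h_i || h_j]
print(attention_aggregate(X, adj, W, a).shape)   # (4, 8)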
APA, Harvard, Vancouver, ISO, and other styles
6

Bandyopadhyay, Bortik. "Querying Structured Data via Informative Representations." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595447189545086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gkirtzou, Aikaterini. "Sparsity regularization and graph-based representation in medical imaging." PhD thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00960163.

Full text
Abstract:
Medical images have been used to depict anatomy or function. Their high dimensionality and non-linear nature make their analysis a challenging problem. In this thesis, we address medical image analysis from the viewpoint of statistical learning theory. First, we examine regularization methods for analyzing MRI data. In this direction, we introduce a novel regularization method, the k-support regularized Support Vector Machine. This algorithm extends the ℓ1-regularized SVM to a mixed norm of both the ℓ1 and ℓ2 norms. We evaluate our algorithm in a neuromuscular disease classification task. Second, we approach the problem of graph representation and comparison for analyzing medical images. Graphs are a technique to represent data with inherent structure. Despite the significant progress in graph kernels, existing graph kernels focus on either unlabeled or discretely labeled graphs, while efficient and expressive representation and comparison of graphs with continuous high-dimensional vector labels remains an open research problem. We introduce a novel method, the pyramid quantized Weisfeiler-Lehman graph representation, to tackle the graph comparison problem for continuous vector labeled graphs. Our algorithm considers statistics of subtree patterns based on the Weisfeiler-Lehman algorithm and uses a pyramid quantization strategy to determine a logarithmic number of discrete labelings. We evaluate our algorithm on two different tasks with real datasets. Overall, as graphs are fundamental mathematical objects and regularization methods are used to control ill-posed problems, both proposed algorithms are potentially applicable to a wide range of domains.
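The discrete Weisfeiler-Lehman relabelling that the pyramid quantized representation builds on can be sketched as follows (assuming networkx graphs with a discrete "label" attribute; the pyramid quantization of continuous vector labels is omitted):

from collections import Counter
import networkx as nx

# Each iteration replaces a node's label with its old label plus the sorted
# multiset of neighbor labels; subtree-pattern counts over all iterations form
# the graph's feature vector.
def wl_subtree_features(G, h=2):
    labels = {v: str(G.nodes[v]["label"]) for v in G.nodes()}
    features = Counter(labels.values())
    for _ in range(h):
        labels = {
            v: labels[v] + "|" + ",".join(sorted(labels[u] for u in G.neighbors(v)))
            for v in G.nodes()
        }
        features.update(labels.values())
    return features

G = nx.path_graph(3)
nx.set_node_attributes(G, {0: "A", 1: "B", 2: "A"}, "label")
print(wl_subtree_features(G, h=1))   # Counter({'A': 2, 'A|B': 2, 'B': 1, 'B|A,A': 1})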
APA, Harvard, Vancouver, ISO, and other styles
8

Peng, Chong. "Integrating Feature and Graph Learning with Factorization Models for Low-Rank Data Representation." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1464.

Full text
Abstract:
Representing and handling high-dimensional data has become increasingly ubiquitous in many real-world applications, such as computer vision, machine learning, and data mining. High-dimensional data usually have intrinsic low-dimensional structures, which are suitable for subsequent data processing. As a consequence, finding low-dimensional data representations has been a common demand in many machine learning and data mining problems. Factorization methods have been impressive in recovering intrinsic low-dimensional structures of the data. When seeking low-dimensional representations of the data, traditional methods mainly face two challenges: 1) how to discover the most variational features/information in the data; 2) how to measure accurate nonlinear relationships in the data. As a solution to these challenges, traditional methods usually make use of a two-step approach, performing feature selection and manifold construction followed by further data processing, which omits the dependence between these learning tasks and produces inaccurate data representations. To resolve these problems, we propose to integrate feature learning and graph learning with a factorization model, which allows the goals of learning features, constructing the manifold, and seeking the new data representation to mutually enhance each other and lead to powerful data representation capability. Moreover, it has been increasingly common that 2-dimensional (2D) data have high feature dimensions, where each example of 2D data is a matrix with its elements being features. For such data, traditional methods usually convert them to 1-dimensional vectorial data before processing, which severely damages the inherent structures of such data. We propose to directly use 2D data for seeking the new representation, which enables the model to preserve the inherent 2D structures of the data. We propose to seek projection directions to find the subspaces in which spatial information is maximally preserved. Also, the manifold and the new data representation are learned in these subspaces, such that the manifold is clean and the new representation is discriminative. Consequently, seeking projections, learning the manifold, and constructing the new representation mutually enhance each other and lead to a powerful data representation technique.
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Pilho. "E-model: event-based graph data model theory and implementation." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29608.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Madisetti, Vijay; Committee Member: Jayant, Nikil; Committee Member: Lee, Chin-Hui; Committee Member: Ramachandran, Umakishore; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
10

Soares, Telma Woerle de Lima. "Estruturas de dados eficientes para algoritmos evolutivos aplicados a projeto de redes." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28052009-163303/.

Full text
Abstract:
Network design problems (NDPs) are very important since they involve several applications from areas of Engineering and Sciences. In order to overcome the limitations of traditional algorithms for NDPs that involve real-world complex networks (in general, modeled by large-scale complete or sparse graphs), heuristics, such as evolutionary algorithms (EAs), have been investigated. Recent research has shown that appropriate data structures can improve EA performance when applied to NDPs. One of these data structures is the Node-depth Encoding (NDE). In general, the performance of EAs with NDE has presented relevant results for large-scale NDPs. This thesis investigates the development of a new representation, based on NDE, called Node-depth-degree Encoding (NDDE). The NDDE is composed of improvements to the NDE operators and the development of new reproduction operators that enable the recombination of solutions. In this way, we developed a recombination operator that works with both non-complete and complete graphs, called EHR (Evolutionary History Recombination Operator). We also developed two other operators that work only with complete graphs, named NOX and NPBX. These improvements have the advantage of keeping the computational complexity of the operators relatively low in order to improve EA performance on large-scale NDPs. The analysis of representation properties showed that NDDE is a redundant representation and, for this reason, we proposed some strategies to avoid redundancy. This analysis also showed that EHR has low running time and no bias; moreover, it revealed that NOX and NPBX are biased toward star-like trees. The application of an EA using the NDDE to classic NDPs, such as the optimal communication spanning tree, the degree-constrained minimum spanning tree, and the one-max tree, showed that the larger the instance is, the better the relative performance is in comparison with other EAs applied to NDPs in the literature. An EA using the NDE with EHR was also applied to a real-world NDP, the reconfiguration of energy distribution systems (involving sparse graphs). The results showed that EHR significantly decreases the convergence time of the EA.
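The basic node-depth encoding idea is easy to sketch (an illustration only; the EHR, NOX, and NPBX operators discussed in the thesis are not reproduced): a spanning tree is stored as its nodes paired with their depths in depth-first preorder, and decoding back to parent pointers needs a single pass and a stack.

# Encode a rooted tree as a list of (node, depth) pairs in DFS preorder.
def encode_node_depth(tree, root):
    order, stack = [], [(root, 0)]
    while stack:
        node, depth = stack.pop()
        order.append((node, depth))
        for child in reversed(tree.get(node, [])):
            stack.append((child, depth + 1))
    return order

# Recover parent pointers from the (node, depth) sequence.
def decode_node_depth(order):
    parent, path = {}, []                 # path holds the current root-to-node chain
    for node, depth in order:
        while path and path[-1][1] >= depth:
            path.pop()
        parent[node] = path[-1][0] if path else None
        path.append((node, depth))
    return parent

tree = {"r": ["a", "b"], "a": ["c"], "b": []}
nd = encode_node_depth(tree, "r")
print(nd)                      # [('r', 0), ('a', 1), ('c', 2), ('b', 1)]
print(decode_node_depth(nd))   # {'r': None, 'a': 'r', 'c': 'a', 'b': 'r'}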
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Structural Graph Representations"

1

Mugnier, Marie-Laure, ed. Graph-based knowledge representation: Computational foundations of conceptual graphs. New York: Springer, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cochez, Michael, Madalina Croitoru, Pierre Marquis, and Sebastian Rudolph, eds. Graph Structures for Knowledge Representation and Reasoning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72308-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Croitoru, Madalina, Pierre Marquis, Sebastian Rudolph, and Gem Stapleton, eds. Graph Structures for Knowledge Representation and Reasoning. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28702-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Croitoru, Madalina, Sebastian Rudolph, Nic Wilson, John Howse, and Olivier Corby, eds. Graph Structures for Knowledge Representation and Reasoning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29449-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Croitoru, Madalina, Sebastian Rudolph, Stefan Woltran, and Christophe Gonzales, eds. Graph Structures for Knowledge Representation and Reasoning. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04534-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Croitoru, Madalina, Pierre Marquis, Sebastian Rudolph, and Gem Stapleton, eds. Graph Structures for Knowledge Representation and Reasoning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78102-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tepfenhart, William M., Judith P. Dick, and John F. Sowa, eds. Conceptual structures, current practices: Second International Conference on Conceptual Structures, ICCS'94, College Park, Maryland, USA, August 16-20, 1994: proceedings. Berlin: Springer-Verlag, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

International Conference on Conceptual Structures (4th: 1996: Sydney, N.S.W.). Conceptual structures: Knowledge representation as interlingua: 4th International Conference on Conceptual Structures, ICCS '96, Sydney, Australia, August 19-22, 1996: proceedings. Berlin: Springer, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lukose, Dickson, ed. Conceptual structures: Fulfilling Peirce's dream: fifth International Conference on Conceptual Structures, ICCS'97, Seattle, Washington, USA, August 3-8, 1997: proceedings. Berlin: Springer, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Structural Graph Representations"

1

Erus, Güray, and Nicolas Loménie. "Automatic Learning of Structural Models of Cartographic Objects." In Graph-Based Representations in Pattern Recognition, 273–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31988-7_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fischer, Andreas, Seiichi Uchida, Volkmar Frinken, Kaspar Riesen, and Horst Bunke. "Improving Hausdorff Edit Distance Using Structural Node Context." In Graph-Based Representations in Pattern Recognition, 148–57. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18224-7_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sanromà, Gerard, René Alquézar, and Francesc Serratosa. "Smooth Simultaneous Structural Graph Matching and Point-Set Registration." In Graph-Based Representations in Pattern Recognition, 142–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20844-7_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Eisenstat, Stanley C., and Joseph W. H. Liu. "Structural Representations of Schur Complements in Sparse Matrices." In Graph Theory and Sparse Matrix Computation, 85–100. New York, NY: Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4613-8369-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Arrivault, Denis, Noël Richard, Christine Fernandez-Maloigne, and Philippe Bouyer. "Collaboration Between Statistical and Structural Approaches for Old Handwritten Characters Recognition." In Graph-Based Representations in Pattern Recognition, 291–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31988-7_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Solé-Ribalta, Albert, and Francesc Serratosa. "A Structural and Semantic Probabilistic Model for Matching and Representing a Set of Graphs." In Graph-Based Representations in Pattern Recognition, 164–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02124-4_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, X. Y., and H. Bunke. "Including geometry in graph representations: A quadratic-time graph isomorphism algorithm and its applications." In Advances in Structural and Syntactical Pattern Recognition, 110–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61577-6_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Washietl, Stefan, and Tanja Gesell. "Graph Representations and Algorithms in Computational Biology of RNA Secondary Structure." In Structural Analysis of Complex Networks, 421–37. Boston: Birkhäuser Boston, 2010. http://dx.doi.org/10.1007/978-0-8176-4789-6_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sanders, Peter, Kurt Mehlhorn, Martin Dietzfelbinger, and Roman Dementiev. "Graph Representation." In Sequential and Parallel Algorithms and Data Structures, 259–69. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25209-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ware, Colin. "The Visual Representation of Information Structures." In Graph Drawing, 1–4. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44541-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Structural Graph Representations"

1

Xu, Jiacheng, Xipeng Qiu, Kan Chen, and Xuanjing Huang. "Knowledge Graph Representation with Jointly Structural and Textual Encoding." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/183.

Full text
Abstract:
The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on symbolic representations of knowledge graphs with structural information, which cannot handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from the text description of an entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform the baselines and obtain state-of-the-art results on link prediction and triplet classification tasks.
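The gating mechanism described at the end of this abstract can be illustrated roughly as follows (a sketch with random placeholder parameters, not the paper's implementation): a learned gate decides, per dimension, how much of the final entity embedding comes from the structural vector and how much from the text-based one.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Per-dimension gate over the concatenation of both representations.
def gated_fusion(e_struct, e_text, W_gate, b_gate):
    g = sigmoid(W_gate @ np.concatenate([e_struct, e_text]) + b_gate)
    return g * e_struct + (1.0 - g) * e_text

rng = np.random.default_rng(0)
dim = 8
e_s, e_t = rng.normal(size=dim), rng.normal(size=dim)
W, b = rng.normal(size=(dim, 2 * dim)), np.zeros(dim)   # learned in practice
print(gated_fusion(e_s, e_t, W, b))                     # fused entity embedding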
APA, Harvard, Vancouver, ISO, and other styles
2

Borutta, Felix, Julian Busch, Evgeniy Faerman, Adina Klink, and Matthias Schubert. "Structural Graph Representations based on Multiscale Local Network Topologies." In WI '19: IEEE/WIC/ACM International Conference on Web Intelligence. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3350546.3352505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dasoulas, George, Ludovic Dos Santos, Kevin Scaman, and Aladin Virmaux. "Coloring Graph Neural Networks for Node Disambiguation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/294.

Full text
Abstract:
In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows well-chosen neural networks to be extended into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.
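The coloring scheme can be illustrated with a small sketch (a simplified, assumption-based version: colors are assigned deterministically to nodes sharing identical attribute vectors, whereas CLIP samples colorings at random and aggregates over them):

import numpy as np

# Nodes with identical attribute vectors receive distinct one-hot "colors"
# appended to their features, so a message passing network can tell them apart.
def color_identical_nodes(X, max_colors):
    colors = np.zeros((X.shape[0], max_colors))
    counters = {}                              # attribute tuple -> next color index
    for i in range(X.shape[0]):
        key = tuple(X[i])
        c = counters.get(key, 0)
        counters[key] = c + 1
        colors[i, c % max_colors] = 1.0        # wrap around for large groups
    return np.hstack([X, colors])

X = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # first two nodes identical
print(color_identical_nodes(X, max_colors=2))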
APA, Harvard, Vancouver, ISO, and other styles
4

Rao, Haocong, Shihao Xu, Xiping Hu, Jun Cheng, and Bin Hu. "Multi-Level Graph Encoding with Structural-Collaborative Relation Learning for Skeleton-Based Person Re-Identification." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/135.

Full text
Abstract:
Skeleton-based person re-identification (Re-ID) is an emerging open topic providing great value for safety-critical applications. Existing methods typically extract hand-crafted features or model skeleton dynamics from the trajectory of body joints, while they rarely explore valuable relation information contained in body structure or motion. To fully explore body relations, we construct graphs to model human skeletons from different levels, and for the first time propose a Multi-level Graph encoding approach with Structural-Collaborative Relation learning (MG-SCR) to encode discriminative graph features for person Re-ID. Specifically, considering that structurally-connected body components are highly correlated in a skeleton, we first propose a multi-head structural relation layer to learn different relations of neighbor body-component nodes in graphs, which helps aggregate key correlative features for effective node representations. Second, inspired by the fact that body-component collaboration in walking usually carries recognizable patterns, we propose a cross-level collaborative relation layer to infer collaboration between different level components, so as to capture more discriminative skeleton graph features. Finally, to enhance graph dynamics encoding, we propose a novel self-supervised sparse sequential prediction task for model pre-training, which facilitates encoding high-level graph semantics for person Re-ID. MG-SCR outperforms state-of-the-art skeleton-based methods, and it achieves superior performance to many multi-modal methods that utilize extra RGB or depth features. Our codes are available at https://github.com/Kali-Hac/MG-SCR.
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, See Hian, Feng Ji, and Wee Peng Tay. "SGAT: Simplicial Graph Attention Network." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/443.

Full text
Abstract:
Heterogeneous graphs have multiple node and edge types and are semantically richer than homogeneous graphs. To learn such complex semantics, many graph neural network approaches for heterogeneous graphs use metapaths to capture multi-hop interactions between nodes. Typically, features from non-target nodes are not incorporated into the learning procedure. However, there can be nonlinear, high-order interactions involving multiple nodes or edges. In this paper, we present Simplicial Graph Attention Network (SGAT), a simplicial complex approach to represent such high-order interactions by placing features from non-target nodes on the simplices. We then use attention mechanisms and upper adjacencies to generate representations. We empirically demonstrate the efficacy of our approach with node classification tasks on heterogeneous graph datasets and further show SGAT's ability in extracting structural information by employing random node features. Numerical experiments indicate that SGAT performs better than other current state-of-the-art heterogeneous graph learning methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Lu, Yuyin, Xin Cheng, Ziran Liang, and Yanghui Rao. "Graph-based Dynamic Word Embeddings." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/594.

Full text
Abstract:
As time goes by, language evolves and word semantics change. Unfortunately, traditional word embedding methods neglect the evolution of language and assume that word representations are static. Although contextualized word embedding models can capture the diverse representations of polysemous words, they ignore temporal information as well. To tackle the aforementioned challenges, we propose a graph-based dynamic word embedding (GDWE) model, which focuses on continually capturing the semantic drift of words. We introduce word-level knowledge graphs (WKGs) to store short-term and long-term knowledge. WKGs provide rich structural information as a supplement to lexical information, which helps enhance word embedding quality and capture semantic drift quickly. Theoretical analysis and extensive experiments validate the effectiveness of our GDWE on dynamic word embedding learning.
APA, Harvard, Vancouver, ISO, and other styles
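As a rough illustration of how a word-level knowledge graph can supplement lexical information, the sketch below blends each word's time-slice embedding with those of its WKG neighbours; this is a deliberately simplified stand-in, not the GDWE update rule.

    # Minimal illustration (not the GDWE algorithm itself): blend each word's
    # time-slice embedding with the embeddings of its neighbours in a word-level
    # knowledge graph (WKG), so graph structure supplements lexical information.
    import numpy as np

    def graph_smoothed_embeddings(E_t, wkg, alpha=0.7):
        """E_t: dict word -> embedding for one time slice;
        wkg: dict word -> list of neighbour words; alpha: weight on the word itself."""
        smoothed = {}
        for w, v in E_t.items():
            nbrs = [E_t[u] for u in wkg.get(w, []) if u in E_t]
            if nbrs:
                smoothed[w] = alpha * v + (1 - alpha) * np.mean(nbrs, axis=0)
            else:
                smoothed[w] = v
        return smoothed

    # Toy example: "apple" drifts toward a tech-related neighbour in a later slice.
    E_slice = {"apple": np.array([1.0, 0.0]), "fruit": np.array([0.9, 0.1]),
               "iphone": np.array([0.0, 1.0])}
    wkg_later = {"apple": ["iphone"]}
    print(graph_smoothed_embeddings(E_slice, wkg_later)["apple"])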
7

Hahn, Elad, and Offer Shai. "A Single Universal Construction Rule for the Structural Synthesis of Mechanisms." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59133.

Full text
Abstract:
In the field of structural synthesis of mechanisms, several synthesis methods have been developed using different approaches. One of the more interesting approaches was that of bottom-up construction via the combination of modular structural groups, known as Assur groups. This approach is combined with new graph representations of mechanisms taken from rigidity theory, capable of representing all the different types of planar and spatial mechanisms. With the strong mathematical base of rigidity theory, a new synthesis method is proposed based on Assur groups, which are reformulated in terms of graph theory and renamed Assur Graphs. Using a single universal construction rule, Assur Graphs of different types and of any number of links are constructed, creating a complete set of building blocks for the synthesis of feasible mechanisms. As its name implies, the single universal construction rule is applicable to mechanisms with all types of joints and links, for planar or spatial motion.
APA, Harvard, Vancouver, ISO, and other styles
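For readers unfamiliar with graph representations of mechanisms, the sketch below shows one common convention (links as vertices, revolute joints as edges) together with the standard Chebychev-Grübler-Kutzbach mobility count for planar linkages; it illustrates the representation only and is not the paper's universal construction rule.

    # Hedged illustration (not the paper's construction rule): represent a planar
    # linkage as a graph with links as vertices and revolute joints as edges,
    # then apply the standard Chebychev-Gruebler-Kutzbach mobility count.
    def mobility_planar(num_links, num_revolute_joints):
        """Planar mobility M = 3(n - 1) - 2*j for n links (including the ground
        link) connected by j one-degree-of-freedom joints."""
        return 3 * (num_links - 1) - 2 * num_revolute_joints

    # Four-bar linkage: 4 links (ground, crank, coupler, rocker), 4 revolute joints.
    four_bar = {  # adjacency list: link -> links it shares a joint with
        "ground": ["crank", "rocker"],
        "crank": ["ground", "coupler"],
        "coupler": ["crank", "rocker"],
        "rocker": ["coupler", "ground"],
    }
    num_joints = sum(len(v) for v in four_bar.values()) // 2
    print(mobility_planar(len(four_bar), num_joints))  # 1 -> a single-DOF mechanism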
8

Hu, Binbin, Zhengwei Wu, Jun Zhou, Ziqi Liu, Zhigang Huangfu, Zhiqiang Zhang, and Chaochao Chen. "MERIT: Learning Multi-level Representations on Temporal Graphs." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/288.

Full text
Abstract:
Recently, representation learning on temporal graphs has drawn increasing attention; it aims at learning temporal patterns to characterize the evolving nature of dynamic graphs in real-world applications. Despite their effectiveness, these methods commonly ignore the individual- and combinatorial-level patterns derived from different types of interactions (e.g., user-item), which are at the heart of representation learning on temporal graphs. To fill this gap, we propose MERIT, a novel multi-level graph attention network for inductive representation learning on temporal graphs. We adaptively embed the original timestamps into a higher-dimensional continuous space to learn individual-level periodicity through a Personalized Time Encoding (PTE) module. Furthermore, we equip MERIT with a Continuous time and Context aware Attention (Coco-Attention) mechanism, which chronologically locates the most relevant neighbors by jointly capturing multi-level context on temporal graphs. Finally, MERIT performs multiple aggregations and propagations to explore and exploit high-order structural information for downstream tasks. Extensive experiments on four public datasets demonstrate the effectiveness of MERIT on both (inductive / transductive) link prediction and node classification tasks.
APA, Harvard, Vancouver, ISO, and other styles
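A common functional form for learnable time encodings of the kind the PTE module builds on is a vector of cosines with learnable frequencies and phases. The sketch below combines such an encoding with a simple attention step over temporal neighbours; parameter names and the aggregation are assumptions, not MERIT's implementation.

    # Hedged sketch of a learnable time encoding (cosine features with per-dimension
    # frequencies and phases) plus a simple time-aware attention step. Names and the
    # aggregation are assumptions, not MERIT's released implementation.
    import numpy as np

    def time_encode(t, omega, phi):
        """Map a scalar time gap t to a d-dimensional vector cos(omega * t + phi)."""
        return np.cos(omega * t + phi)

    def temporal_attention(q_feat, nbr_feats, nbr_dt, omega, phi):
        """Score neighbours by features concatenated with encodings of their time gaps."""
        q = np.concatenate([q_feat, time_encode(0.0, omega, phi)])
        keys = np.stack([np.concatenate([f, time_encode(dt, omega, phi)])
                         for f, dt in zip(nbr_feats, nbr_dt)])
        scores = keys @ q / np.sqrt(q.size)
        w = np.exp(scores - scores.max())
        w = w / w.sum()
        return w @ keys                      # time-aware aggregated neighbourhood

    rng = np.random.default_rng(2)
    omega, phi = rng.normal(size=8), rng.normal(size=8)
    out = temporal_attention(rng.normal(size=4),
                             [rng.normal(size=4) for _ in range(3)],
                             [0.5, 2.0, 10.0], omega, phi)
    print(out.shape)  # (12,)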
9

Ju, Wei, Xiao Luo, Meng Qu, Yifan Wang, Chong Chen, Minghua Deng, Xian-Sheng Hua, and Ming Zhang. "TGNN: A Joint Semi-supervised Framework for Graph-level Classification." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/295.

Full text
Abstract:
This paper studies semi-supervised graph classification, a crucial task with a wide range of applications in social network analysis and bioinformatics. Recent works typically adopt graph neural networks to learn graph-level representations for classification, failing to explicitly leverage features derived from graph topology (e.g., paths). Moreover, when labeled data is scarce, these methods are far from satisfactory due to their insufficient topology exploration of unlabeled data. We address the challenge by proposing a novel semi-supervised framework called Twin Graph Neural Network (TGNN). To explore graph structural information from complementary views, our TGNN has a message passing module and a graph kernel module. To fully utilize unlabeled data, for each module, we calculate the similarity of each unlabeled graph to other labeled graphs in the memory bank and our consistency loss encourages consistency between two similarity distributions in different embedding spaces. The two twin modules collaborate with each other by exchanging instance similarity knowledge to fully explore the structure information of both labeled and unlabeled data. We evaluate our TGNN on various public datasets and show that it achieves strong performance.
APA, Harvard, Vancouver, ISO, and other styles
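The consistency loss described above compares, for each unlabeled graph, its similarity distribution over a memory bank of labeled graphs as computed by the two twin modules. A minimal sketch with a symmetric KL-style penalty, under assumed shapes, is given below; the exact divergence used by TGNN may differ.

    # Minimal sketch of a consistency loss between two similarity distributions
    # (message-passing view vs. graph-kernel view) over a memory bank of labelled
    # graph embeddings. Shapes and the exact divergence are assumptions.
    import numpy as np

    def softmax(x):
        x = x - x.max()
        e = np.exp(x)
        return e / e.sum()

    def consistency_loss(z_mp, z_kernel, bank_mp, bank_kernel, tau=0.5):
        """z_*: embeddings of one unlabeled graph in each module's space;
        bank_*: (M, d) memory banks of labelled graphs in the matching spaces."""
        p = softmax(bank_mp @ z_mp / tau)          # similarity distribution, module 1
        q = softmax(bank_kernel @ z_kernel / tau)  # similarity distribution, module 2
        eps = 1e-9
        kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
        kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
        return 0.5 * (kl_pq + kl_qp)               # symmetric divergence encourages agreement

    rng = np.random.default_rng(3)
    print(consistency_loss(rng.normal(size=16), rng.normal(size=16),
                           rng.normal(size=(10, 16)), rng.normal(size=(10, 16))))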
10

Lyu, Gengyu, Yanan Wu, and Songhe Feng. "Deep Graph Matching for Partial Label Learning." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/459.

Full text
Abstract:
Partial Label Learning (PLL) aims to learn from training data where each instance is associated with a set of candidate labels, among which only one is correct. In this paper, we formulate the PLL task as an "instance-label" matching selection problem and propose a DeepGNN-based graph matching PLL approach to solve it. Specifically, we first construct all instances and labels as graph nodes in two separate graphs, and then integrate them into a unified matching graph by connecting each instance to its candidate labels. Afterwards, a graph attention mechanism is adopted to aggregate and update all node states on the instance graph to form structural representations for each instance. Finally, each candidate label is embedded into its corresponding instance, and a matching affinity score is derived for each instance-label correspondence with a progressive cross-entropy loss. Extensive experiments on various datasets have demonstrated the superiority of our proposed method.
APA, Harvard, Vancouver, ISO, and other styles
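The matching step above ultimately yields, for each instance, affinity scores over its candidate labels trained with a cross-entropy-style loss restricted to the candidate set. The sketch below is a simplified illustration in which the most confident candidate serves as a pseudo-target; it only approximates the paper's progressive cross-entropy loss.

    # Simplified sketch of instance-label affinity scoring for partial label
    # learning: dot-product affinities are normalised over each instance's
    # candidate set and trained with a candidate-restricted cross-entropy.
    # Using the most confident candidate as a pseudo-target only approximates
    # the paper's progressive loss.
    import numpy as np

    def candidate_softmax(scores, candidate_mask):
        masked = np.where(candidate_mask, scores, -1e9)
        e = np.exp(masked - masked.max(axis=1, keepdims=True)) * candidate_mask
        return e / e.sum(axis=1, keepdims=True)

    def pll_loss(instance_emb, label_emb, candidate_mask):
        """instance_emb: (n, d); label_emb: (L, d); candidate_mask: (n, L) booleans."""
        affinity = instance_emb @ label_emb.T                 # instance-label affinity scores
        probs = candidate_softmax(affinity, candidate_mask)   # normalised over candidates only
        pseudo = np.zeros_like(probs)                         # most confident candidate as target
        pseudo[np.arange(len(probs)), probs.argmax(axis=1)] = 1.0
        return -np.mean(np.sum(pseudo * np.log(probs + 1e-9), axis=1))

    rng = np.random.default_rng(4)
    X, Y = rng.normal(size=(6, 8)), rng.normal(size=(4, 8))
    mask = rng.random((6, 4)) < 0.5
    mask[np.arange(6), rng.integers(0, 4, 6)] = True          # ensure at least one candidate
    print(pll_loss(X, Y, mask))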