Ready-made bibliography on the topic "Hypergraph-structure"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Hypergraph-structure".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant details are available in the work's metadata.

Journal articles on the topic "Hypergraph-structure"

1

Xu, Jinhuan, Liang Xiao, and Jingxiang Yang. "Unified Low-Rank Subspace Clustering with Dynamic Hypergraph for Hyperspectral Image". Remote Sensing 13, no. 7 (April 2, 2021): 1372. http://dx.doi.org/10.3390/rs13071372.

Abstract:
Low-rank representation with hypergraph regularization has achieved great success in hyperspectral imagery, which can explore global structure, and further incorporate local information. Existing hypergraph learning methods only construct the hypergraph by a fixed similarity matrix or are adaptively optimal in original feature space; they do not update the hypergraph in subspace-dimensionality. In addition, the clustering performance obtained by the existing k-means-based clustering methods is unstable as the k-means method is sensitive to the initialization of the cluster centers. In order to address these issues, we propose a novel unified low-rank subspace clustering method with dynamic hypergraph for hyperspectral images (HSIs). In our method, the hypergraph is adaptively learned from the low-rank subspace feature, which can capture a more complex manifold structure effectively. In addition, we introduce a rotation matrix to simultaneously learn continuous and discrete clustering labels without any relaxing information loss. The unified model jointly learns the hypergraph and the discrete clustering labels, in which the subspace feature is adaptively learned by considering the optimal dynamic hypergraph with the self-taught property. The experimental results on real HSIs show that the proposed methods can achieve better performance compared to eight state-of-the-art clustering methods.
2

Feng, Yifan, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. "Hypergraph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3558–65. http://dx.doi.org/10.1609/aaai.v33i01.33013558.

Abstract:
In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.
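For orientation, the hyperedge convolution described in this abstract reduces to a degree-normalized propagation through the incidence matrix. Below is a minimal NumPy sketch following the commonly cited HGNN propagation rule; the function names are hypothetical, hyperedge weights default to identity, and this is not the authors' released code.

```python
import numpy as np

def hgnn_layer(H, X, Theta, w=None):
    """One hyperedge-convolution step: Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta."""
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if w is None else w       # hyperedge weights (identity by default)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w))    # vertex-degree normalization
    De_inv = np.diag(1.0 / H.sum(axis=0))          # hyperedge-degree normalization
    G = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(G @ X @ Theta, 0.0)          # ReLU non-linearity

# toy hypergraph: 4 vertices, 2 hyperedges
H = np.array([[1, 0], [1, 1], [1, 1], [0, 1]], dtype=float)
out = hgnn_layer(H, np.random.rand(4, 3), np.random.rand(3, 2))
print(out.shape)  # (4, 2)
```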
3

Liu, Jian, Dong Chen, Jingyan Li, and Jie Wu. "Neighborhood hypergraph model for topological data analysis". Computational and Mathematical Biophysics 10, no. 1 (January 1, 2022): 262–80. http://dx.doi.org/10.1515/cmb-2022-0142.

Abstract:
Hypergraph, as a generalization of the notions of graph and simplicial complex, has gained a lot of attention in many fields. It is a relatively new mathematical model to describe the high-dimensional structure and geometric shapes of data sets. In this paper, we introduce the neighborhood hypergraph model for graphs and combine the neighborhood hypergraph model with the persistent (embedded) homology of hypergraphs. Given a graph, we can obtain a neighborhood complex introduced by L. Lovász and a filtration of hypergraphs parameterized by a weight function on the power set of the vertex set of the graph. The weight function can be obtained by the construction from the geometric structure of graphs or the weights on the vertices of the graph. We show the persistent theory of such filtrations of hypergraphs. One typical application of the persistent neighborhood hypergraph is to distinguish the planar square structure of cisplatin and transplatin. Another application of persistent neighborhood hypergraph is to describe the structure of small fullerenes such as C20. The bond length and the number of adjacent carbon atoms of a carbon atom can be derived from the persistence diagram. Moreover, our method gives a highly matched stability prediction (with a correlation coefficient 0.9976) of small fullerene molecules.
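To make the construction concrete, here is a hedged sketch of turning a graph into a neighborhood hypergraph: one hyperedge per vertex, built from that vertex's neighbors. Whether the vertex itself is included and how the filtration weights are assigned are simplifying assumptions here, not details taken from the paper.

```python
import networkx as nx

def neighborhood_hypergraph(G, closed=True):
    """Return one hyperedge per vertex: its (closed) neighborhood in G."""
    hyperedges = {}
    for v in G.nodes:
        nbrs = set(G.neighbors(v))
        if closed:
            nbrs.add(v)            # closed neighborhood includes the vertex itself
        hyperedges[v] = frozenset(nbrs)
    return hyperedges

print(neighborhood_hypergraph(nx.cycle_graph(5)))
```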
4

Yang, Zhe, Liangkui Xu, and Lei Zhao. "Multimodal Feature Fusion Based Hypergraph Learning Model". Computational Intelligence and Neuroscience 2022 (May 16, 2022): 1–13. http://dx.doi.org/10.1155/2022/9073652.

Abstract:
Hypergraph learning is a new research hotspot in the machine learning field. The performance of the hypergraph learning model depends on the quality of the hypergraph structure built by different feature extraction methods as well as its incidence matrix. However, the existing models are all hypergraph structures built based on one feature extraction method, with limited feature extraction and abstract expression ability. This paper proposed a multimodal feature fusion method, which firstly built a single modal hypergraph structure based on different feature extraction methods, and then extended the hypergraph incidence matrix and weight matrix of different modals. The extended matrices fuse the multimodal abstract feature and an expanded Markov random walk range during model learning, with stronger feature expression ability. However, the extended multimodal incidence matrix has a high scale and high computational cost. Therefore, the Laplacian matrix fusion method was proposed, which performed Laplacian matrix transformation on the incidence matrix and weight matrix of every modal, respectively, and then conducted a weighted superposition on these Laplacian matrices for subsequent model training. The tests on four different types of datasets indicate that the hypergraph learning model obtained after multimodal feature fusion has a better classification performance than the single modal model. After Laplacian matrix fusion, the average time can be reduced by about 40% compared with the extended incidence matrix, the classification performance can be further improved, and the index F1 can be improved by 8.4%.
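The fusion step described above can be pictured as a weighted sum of per-modality normalized hypergraph Laplacians. The sketch below assumes the standard Zhou-style normalized Laplacian and hypothetical fusion weights alpha; it illustrates the idea rather than the paper's exact formulation.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian: I - Dv^-1/2 H W De^-1 H^T Dv^-1/2."""
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w))
    De_inv = np.diag(1.0 / H.sum(axis=0))
    return np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt

def fuse_laplacians(H_per_modal, w_per_modal, alphas):
    """Weighted superposition L = sum_m alpha_m * L_m over modalities."""
    return sum(a * hypergraph_laplacian(H, w)
               for H, w, a in zip(H_per_modal, w_per_modal, alphas))
```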
5

Mahmood Shuker, Faiza. "Improved Blockchain Network Performance using Hypergraph Structure". Journal of Engineering and Applied Sciences 14, no. 2 (November 20, 2019): 5579–84. http://dx.doi.org/10.36478/jeasci.2019.5579.5584.

6

Peng, Hao, Cheng Qian, Dandan Zhao, Ming Zhong, Jianmin Han, and Wei Wang. "Targeting attack hypergraph networks". Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 7 (July 2022): 073121. http://dx.doi.org/10.1063/5.0090626.

Abstract:
In modern systems, from brain neural networks to social group networks, pairwise interactions are not sufficient to express higher-order relationships. The smallest unit of their internal function is not composed of a single functional node but results from multiple functional nodes acting together. Therefore, researchers adopt the hypergraph to describe complex systems. The targeted attack on random hypergraph networks is still a problem worthy of study. This work puts forward a theoretical framework to analyze the robustness of random hypergraph networks under the background of a targeted attack on nodes with high or low hyperdegrees. We discovered the process of cascading failures and the giant connected cluster (GCC) of the hypergraph network under targeted attack by associating the simple mapping of the factor graph with the hypergraph and using percolation theory and generating function. On random hypergraph networks, we do Monte-Carlo simulations and find that the theoretical findings match the simulation results. Similarly, targeted attacks are more effective than random failures in disintegrating random hypergraph networks. The threshold of the hypergraph network grows as the probability of high hyperdegree nodes being deleted increases, indicating that the network’s resilience becomes more fragile. When considering real-world scenarios, our conclusions are validated by real-world hypergraph networks. These findings will help us understand the impact of the hypergraph’s underlying structure on network resilience.
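As a rough illustration of the attack protocol, and not the paper's percolation framework, one can remove the highest-hyperdegree vertices first and track the relative size of the giant connected component of the surviving structure, measured here on a simple clique-expanded projection.

```python
import networkx as nx

def gcc_after_targeted_attack(hyperedges, frac_removed):
    """Relative GCC size after deleting the highest-hyperdegree nodes (illustrative only)."""
    nodes = sorted({v for e in hyperedges for v in e})
    hyperdeg = {v: sum(v in e for e in hyperedges) for v in nodes}
    removed = set(sorted(nodes, key=lambda v: -hyperdeg[v])[:int(frac_removed * len(nodes))])
    G = nx.Graph()
    G.add_nodes_from(v for v in nodes if v not in removed)
    for e in hyperedges:  # clique-expand the surviving part of each hyperedge
        survivors = [v for v in e if v not in removed]
        G.add_edges_from((u, w) for i, u in enumerate(survivors) for w in survivors[i + 1:])
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / len(nodes)

print(gcc_after_targeted_attack([{1, 2, 3}, {3, 4, 5}, {5, 6, 1}], frac_removed=0.2))
```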
7

Xu, Xixia, Qi Zou, and Xue Lin. "Adaptive Hypergraph Neural Network for Multi-Person Pose Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2955–63. http://dx.doi.org/10.1609/aaai.v36i3.20201.

Abstract:
This paper proposes a novel two-stage hypergraph-based framework, dubbed ADaptive Hypergraph Neural Network (AD-HNN) to estimate multiple human poses from a single image, with a keypoint localization network and an Adaptive-Pose Hypergraph Neural Network (AP-HNN) added onto the former network. For providing better guided representations of AP-HNN, we employ a Semantic Interaction Convolution (SIC) module within the initial localization network to acquire more explicit predictions. Building upon this, we design a novel adaptive hypergraph to represent a human body for capturing high-order semantic relations among different joints. Notably, it can adaptively adjust the relations between joints and seek the most reasonable structure for the variable poses to benefit the keypoint localization. These two stages are combined to be trained in an end-to-end fashion. Unlike traditional Graph Convolutional Networks (GCNs) that are based on a fixed tree structure, AP-HNN can deal with ambiguity in human pose estimation. Experimental results demonstrate that the AD-HNN achieves state-of-the-art performance on the MS-COCO, MPII, and CrowdPose datasets.
8

Huang, Yuan, Liping Wang, Xueying Wang, and Wei An. "Joint Probabilistic Hypergraph Matching Labeled Multi-Bernoulli Filter for Rigid Target Tracking". Applied Sciences 10, no. 1 (December 20, 2019): 99. http://dx.doi.org/10.3390/app10010099.

Abstract:
The likelihood determined by the distance between measurements and predicted states of targets is widely used in many filters for data association. However, if the actual motion model of the targets does not coincide with the preset dynamic motion model, this criterion will lead to poor performance when closely spaced targets are tracked. For the rigid target tracking task, the structure of rigid targets can be exploited to improve the data association performance. In this paper, the structure of the rigid target is represented as a hypergraph, and the problem of data association is formulated as a hypergraph matching problem. However, the performance of hypergraph matching degrades if there are missed detections and clutter. To overcome this limitation, we propose a joint probabilistic hypergraph matching labeled multi-Bernoulli (JPHGM-LMB) filter with all undetected cases being considered. In JPHGM-LMB, the likelihood is built based on group structure rather than the distance between predicted states and measurements. Consequently, the probability of each target associated with each measurement (joint association probabilities) can be obtained. Then, the structure information is integrated into the LMB filter by revising each single target likelihood with joint association probabilities. However, because all undetected cases are considered, the proposed approach is usable in real time only for a limited number of targets. Extensive simulations have demonstrated the significant performance improvement of our proposed method.
9

Kosian, David A., and Leon A. Petrosyan. "Two-Level Cooperative Game on Hypergraph". Contributions to Game Theory and Management 14 (2021): 227–35. http://dx.doi.org/10.21638/11701/spbu31.2021.17.

Abstract:
In the paper, the cooperative game with a hypergraph communication structure is considered. For this class of games, a new allocation rule is proposed by splitting the original game into a game between hyperlinks and games within them. The communication possibilities are described by the hypergraph in which the nodes are players and hyperlinks are the communicating subgroups of players. The game between hyperlinks and between players in each hyperlink is described. The payoff of each player is influenced by the actions of other players, depending on the distance between them on the hypergraph. The constructed characteristic functions, based on cooperative behaviour, satisfy the convexity property. The results are illustrated with an example.
10

Siriwong, Pinkaew, and Ratinan Boonklurb. "k-Zero-Divisor and Ideal-Based k-Zero-Divisor Hypergraphs of Some Commutative Rings". Symmetry 13, no. 11 (October 20, 2021): 1980. http://dx.doi.org/10.3390/sym13111980.

Abstract:
Let R be a commutative ring with nonzero identity and k ≥ 2 be a fixed integer. The k-zero-divisor hypergraph H_k(R) of R consists of the vertex set Z(R, k), the set of all k-zero-divisors of R, and the hyperedges of the form {a_1, a_2, a_3, …, a_k}, where a_1, a_2, a_3, …, a_k are k distinct elements in Z(R, k), which means (i) a_1 a_2 a_3 ⋯ a_k = 0 and (ii) the products of all elements of any (k−1)-subsets of {a_1, a_2, a_3, …, a_k} are nonzero. This paper provides two commutative rings so that one of them induces a family of complete k-zero-divisor hypergraphs, while another induces a family of k-partite σ-zero-divisor hypergraphs, which illustrates unbalanced or asymmetric structure. Moreover, the diameter and the minimum length of all cycles or girth of the family of k-partite σ-zero-divisor hypergraphs are determined. In addition to a k-zero-divisor hypergraph, we provide the definition of an ideal-based k-zero-divisor hypergraph and some basic results on these hypergraphs concerning a complete k-partite k-uniform hypergraph, a complete k-uniform hypergraph, and a clique.
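For intuition, a brute-force check of this definition on Z_n (an illustrative script, not from the paper) lists the k-subsets whose full product vanishes modulo n while every (k−1)-subproduct does not.

```python
from itertools import combinations
from math import prod

def k_zero_divisor_hyperedges(n, k):
    """Hyperedges of the k-zero-divisor hypergraph of Z_n, by brute force."""
    edges = []
    for combo in combinations(range(1, n), k):
        if prod(combo) % n != 0:
            continue                                   # condition (i) fails
        if all(prod(sub) % n != 0 for sub in combinations(combo, k - 1)):
            edges.append(combo)                        # condition (ii) holds
    return edges

# In Z_12 with k = 3, {2, 3, 10} qualifies: 2*3*10 = 60 ≡ 0 (mod 12),
# while 2*3, 2*10 and 3*10 are all nonzero mod 12.
print(k_zero_divisor_hyperedges(12, 3))
```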

Doctoral dissertations on the topic "Hypergraph-structure"

1

Datta, Sagnik. "Fully Bayesian structure learning of Bayesian networks and their hypergraph extensions". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2283.

Abstract:
In this thesis, I address the important problem of determining the structure of complex networks, using the widely studied class of Bayesian network models as a concrete vehicle for my ideas. The structure of a Bayesian network represents a set of conditional independence relations that hold in the domain. Learning the structure of the Bayesian network that represents a domain can reveal insights into its underlying causal structure. It can also be used to predict quantities that are difficult, expensive, or unethical to measure, such as the probability of cancer, from other quantities that are easier to obtain. The contributions of this thesis are: (A) a software package developed in the C language for structure learning of Bayesian networks; (B) the introduction of a new jumping kernel in the Metropolis-Hastings algorithm for faster sampling of networks; (C) the extension of the notion of Bayesian networks to structures involving loops; and (D) a software package developed specifically to learn cyclic structures. Our primary objective is learning the structure of complex networks represented by a graph, so the graph structure is our parameter of interest. A graph consists of nodes and arcs; all parameters appearing in the mathematical model other than those characterizing the graph structure are treated as nuisance parameters.
2

Sharma, Govind. "Hypergraph Network Models: Learning, Prediction, and Representation in the Presence of Higher-Order Relations". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4781.

Abstract:
The very thought about “relating” objects makes us assume the relation would be “pairwise”, and not of a “higher-order” — involving possibly more than two of them at a time. Yet in reality, higher-order relations do exist and are spread across multiple domains: medical science (e.g., coexisting diseases/symptoms), pharmacology (e.g., reacting chemicals), bibliometrics (e.g., collaborating researchers), the film industry (e.g., cast/crew), human resource (e.g., a team), social sciences (e.g., negotiating/conflicting nations), and so on. Since a collection of intersecting higher-order relations lose context when represented by a graph, “hypergraphs” – graph-like structures that allow edges (called “hyperedges”/“hyperlinks”) spanning possibly more than two nodes – capture them better. In a quest to better understand such relations, in this thesis we focus on solving a few network-science oriented problems involving hypergraphs. In the first of three broad parts, we study the behavior of usual graph-oriented networks that have an otherwise-ignored hypergraph underpinning. We particularly establish the skewness a hypergraph introduces into its induced graphs, and the effect of these biases on the structure and evaluation of the well-known problem of link prediction in networks. We find that an underlying hypergraph structure makes popular heuristics such as common-neighbors overestimate their ability to predict links. Gathering enough evidence – both theoretical and empirical – to support the need to reestablish the evaluations of link prediction algorithms on hypergraph-derived networks, we propose adjustments that essentially undo the undesired effects of hypergraphs in performance scores. Motivated by this observation, we extend graph-based structural node similarity measures to cater to hypergraphs (although still, for similarity between pairs of nodes). To be specific, we first establish mathematical transformations that could transfer any graph-structure-based notion of similarity between node pairs to a hypergraph-structure-based one. Using exhaustive combinations of the newly established scores with the existing ones, we could show improvements in the performance of both structural as well as temporal link prediction. For the second part of our thesis, we turn our attention towards a more central problem in hypergraphs — the “hyperlink/hyperedge prediction” problem. It simply refers to developing models to predict the occurrence of missing or future hyperedges. We first study the effect of “negative sampling (sampling the negative class) – an exercise performed due to the extreme intractability of the set of all non-hyperlinks, also known as the class imbalance problem – on hyperlink prediction, which has never been studied in the past. Since we observe hyperlink prediction algorithms performing differently under different negative sampling techniques, our experiments help the seemingly unimportant procedure gain some significance. Moreover, we contribute towards two benchmark negative sampling algorithms that would help standardize the step. Moving on from the negative sampling problem to predicting hyperlinks themselves, we work on two different approaches: a “clique-closure” based approach, and a “sub-higher-order” oriented one. 
While in the former, we develop and successfully test the clique-closure hypothesis – that hyperlinks mostly form from cliques or near-cliques – and are able to utilize it for hyperlink prediction via matrix completion (C3MM), the latter approach works on a novel information flow model in hypergraphs. More precisely, we introduce the concept of “sub-hyperedges” to capture the sub-higher-order notion in relations, and utilize an attention-based neural network model called SHONeN focusing on sub-hyperedges of a hyperedge. Owing to SHONeN’s computational complexity, we propose a sub-optimal heuristic that is able to perform better than its baselines on the downstream task of predicting hyperedges. The third and final part of our thesis is dedicated exclusively to “bipartite hypergraphs”: structures that are used to capture higher-order relations between two disjoint node sets, e.g., a patient’s diagnosis (possibly multiple diseases and symptoms), a movie project (multiple actors and crew members), etc. We first capture the structure of real-world such networks using “per-fixed” bipartite hypergraphs (those where the set of left and right hyperedges is fixed beforehand), and then focus on the “bipartite hyperlink prediction” problem. Since existing self-attention based approaches meant for usual hypergraphs do not work for bipartite hypergraphs — a fact that our experiments validate, we propose a “cross-attention” model for the same, and use the notion of set-matching over collections of sets to solve for bipartite hyperlink prediction. As a result, we develop a neural network architecture called CATSETMAT that performs way better than any of the other approaches meant to solve the bipartite hyperlink prediction problem. Last but not least, we also explain how, owing to an observation we call the “positive-negative dilemma”, existing state-of-the-art algorithms fail on bipartite hypergraphs.

Book chapters on the topic "Hypergraph-structure"

1

Dai, Qionghai, and Yue Gao. "Hypergraph Structure Evolution". In Artificial Intelligence: Foundations, Theory, and Algorithms, 101–20. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_6.

Abstract:
In practice, noise exists in the process of data collection and hypergraph construction. Therefore, missing, abundant, and noisy connections may be introduced into the generated hypergraph structure, which may lead to inaccurate inference on hypergraph. Another issue comes from the increasing data stream, which is also very common in many applications. It is important to consider the structure evolution methods on the hypergraph, which optimize the hypergraph structure accordingly. Early hypergraph computation methods mainly rely on static hypergraph structure, which may suffer from the limitation of the static mechanism when confronting random and increasing data scenarios. In this chapter, we introduce dynamic hypergraph structure evolution methods, including both hypergraph component optimization and hypergraph structure optimization. Finally, we briefly introduce the incremental learning method on growing data.
2

Dai, Qionghai, and Yue Gao. "Hypergraph Computation Paradigms". In Artificial Intelligence: Foundations, Theory, and Algorithms, 41–47. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_3.

Abstract:
This chapter introduces three hypergraph computation paradigms, including intra-hypergraph computation, inter-hypergraph computation, and hypergraph structure computation. Intra-hypergraph computation representation aims to conduct representation learning of a hypergraph, where each subject is represented by a hypergraph of its components. Inter-hypergraph computation is to conduct representation learning of vertices in the hypergraph, where each subject is a vertex in the hypergraph. Hypergraph structure computation is to conduct hypergraph structure prediction, which aims to find the connections among vertices. This chapter is a general introduction of hypergraph computation paradigms to show how to formulate the task in the hypergraph computation framework.
3

Dai, Qionghai, and Yue Gao. "Hypergraph Modeling". In Artificial Intelligence: Foundations, Theory, and Algorithms, 49–71. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_4.

Abstract:
Hypergraph modeling is the fundamental task in hypergraph computation, which targets on establishing a high-quality hypergraph structure to accurately formulate the high-order correlation among data. In this section, we introduce different hypergraph modeling methods to show how to build hypergraphs using various pieces of information, such as features, attributes, and/or graphs. These methods are organized into two broad categories, depending on whether these correlations are explicit or implicit, to distinguish the similarities and differences. We then further discuss different hypergraph structure optimization and generation methods, such as adaptive hypergraph modeling, generative hypergraph modeling, and knowledge hypergraph generation.
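A common feature-based construction consistent with the implicit-correlation modeling described above is distance-based: each vertex spawns a hyperedge containing itself and its k nearest neighbors in feature space. This is a generic sketch, not the chapter's specific recipe, and the helper name is hypothetical.

```python
import numpy as np

def knn_hypergraph_incidence(X, k=3):
    """Build an n x n incidence matrix: column v is the hyperedge {v} ∪ kNN(v)."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    H = np.zeros((n, n))
    for v in range(n):
        nbrs = np.argsort(dists[v])[:k + 1]   # k nearest neighbors plus v itself (distance 0)
        H[nbrs, v] = 1.0
    return H

H = knn_hypergraph_incidence(np.random.rand(10, 4), k=3)
print(H.sum(axis=0))   # every hyperedge contains k + 1 vertices
```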
4

Dai, Qionghai, and Yue Gao. "Neural Networks on Hypergraph". In Artificial Intelligence: Foundations, Theory, and Algorithms, 121–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_7.

Abstract:
With the development of deep learning on high-order correlations, hypergraph neural networks have received much attention in recent years. Generally, the neural networks on hypergraph can be divided into two categories, including the spectral-based methods and the spatial-based methods. For the spectral-based methods, the convolution operation is formulated in the spectral domain of graph, and we introduce the typical spectral-based methods, including hypergraph neural networks (HGNN), hypergraph convolution with attention (Hyper-Atten), and hyperbolic hypergraph neural network (HHGNN), which extend hypergraph computation to hyperbolic spaces beyond the Euclidean space. For the spatial-based methods, the convolution operation is defined in groups of spatially close vertices. We then present spatial-based hypergraph neural networks of the general hypergraph neural networks (HGNN+) and the dynamic hypergraph neural networks (DHGNN). Additionally, there are several convolution methods that attempt to reduce the hypergraph structure to the graph structure, so that the existing graph convolution methods can be directly deployed. Lastly, we analyze the association and comparison between hypergraph and graph in the two areas described above (spectral-based, spatial-based), further demonstrating the ability and advantages of hypergraph on constructing and computing higher-order correlations in the data.
5

Dai, Qionghai, and Yue Gao. "Mathematical Foundations of Hypergraph". In Artificial Intelligence: Foundations, Theory, and Algorithms, 19–40. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_2.

Abstract:
In this chapter, we introduce the mathematical foundations of hypergraph and present the mathematical notations that are used to facilitate deep understanding and analysis of hypergraph structure. A hypergraph is composed of a set of vertices and hyperedges, and it is a generalization of a graph, where a weighted hypergraph quantifies the relative importance of hyperedges or vertices. Hypergraph can also be divided into two main categories, i.e., the undirected hypergraph representation and the directed hypergraph representation. The latter one further divides the vertices in one hyperedge into the source vertex set and the target vertex set to model more complex correlations. Additionally, we discuss the relationship between hypergraph and graph from the perspective of structural transformation and expressive ability. The most intuitive difference between a simple graph and a hypergraph can be observed in the size of order and expression of adjacency. A hypergraph can be converted into a simple graph using clique expansion, star expansion, and line expansion. Moreover, the proof based on random walks and Markov chains establishes the relationship between hypergraphs with edge-independent vertex weights and weighted graphs.
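The clique expansion mentioned above has a one-line matrix form. The sketch below (a hypothetical helper, unweighted hyperedges) replaces each hyperedge with a clique, so A[u, v] counts the hyperedges shared by vertices u and v.

```python
import numpy as np

def clique_expansion(H):
    """Simple-graph adjacency obtained by expanding each hyperedge into a clique."""
    A = H @ H.T                 # co-membership counts
    np.fill_diagonal(A, 0.0)    # remove self-loops
    return A

H = np.array([[1, 0], [1, 1], [1, 1], [0, 1]], dtype=float)
print(clique_expansion(H))
```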
6

Dai, Qionghai, and Yue Gao. "The DeepHypergraph Library". In Artificial Intelligence: Foundations, Theory, and Algorithms, 237–40. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_12.

Abstract:
This chapter introduces the DeepHypergraph library, which bridges the hypergraph theory and hypergraph applications. This library provides the generation of multiple low-order structures (such as graph and directed graph), high-order structures (such as hypergraph and directed hypergraph), datasets, operations, learning methods, visualizations, etc. We first introduce the design motivation and the overall architecture of the library. Then, we introduce the "correlation structure" and "function library" of the DeepHypergraph library, respectively.
7

Dai, Qionghai, and Yue Gao. "Typical Hypergraph Computation Tasks". In Artificial Intelligence: Foundations, Theory, and Algorithms, 73–99. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_5.

Abstract:
After hypergraph structure generation for the data, the next step is how to conduct data analysis on the hypergraph. In this chapter, we introduce four typical hypergraph computation tasks, including label propagation, data clustering, imbalance learning, and link prediction. The first typical task is label propagation, which is to predict the labels for the vertices, i.e., assigning a label to each unlabeled vertex in the hypergraph, based on the labeled information. In general cases, label propagation is to propagate the label information from labeled vertices to unlabeled vertices through structural information of the hyperedges. In this part, we discuss the hypergraph cut on hypergraphs and random walk interpretation of label propagation on hypergraphs. The second typical task is data clustering, which is formulated as dividing the vertices into several parts in a hypergraph. In this part, we introduce a hypergraph Laplacian smoothing filter and an embedded model for hypergraph clustering tasks. The third typical task is cost-sensitive learning, which targets on learning with different mis-classification costs. The fourth typical task is link prediction, which aims to discover missing relations or predict new coming hyperedges based on the observed hypergraph.
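For the label propagation task listed first, the random-walk view translates into a short iteration. The transition matrix below follows the common Dv^-1 H W De^-1 H^T form; alpha and the iteration count are illustrative choices rather than the chapter's exact settings.

```python
import numpy as np

def hypergraph_label_propagation(H, Y, w=None, alpha=0.9, iters=50):
    """Diffuse one-hot labels Y over the hypergraph random-walk transition matrix."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    P = np.diag(1.0 / (H @ w)) @ H @ np.diag(w) @ np.diag(1.0 / H.sum(axis=0)) @ H.T
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * P @ F + (1.0 - alpha) * Y   # mix propagated and initial labels
    return F.argmax(axis=1)                     # predicted class per vertex
```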
8

Dai, Qionghai, and Yue Gao. "Hypergraph Computation for Medical and Biological Applications". In Artificial Intelligence: Foundations, Theory, and Algorithms, 191–221. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_10.

Abstract:
Hypergraph computation, with its superior capability in complex data modeling, is a powerful tool for many medical and biological applications. In this chapter, we introduce four typical examples of the use of hypergraph computation in medical and biological applications, i.e., computer-aided diagnosis, survival prediction with histopathological images, drug discovery, and medical image segmentation. In each application, we present how to construct the hypergraph structure with different kinds of medical and biological data and different hypergraph computation strategies for these tasks respectively. We can notice that hypergraph computation has shown advantages in these applications.
9

Kosian, David A., and Leon A. Petrosyan. "New Characteristic Function for Cooperative Games with Hypergraph Communication Structure". In Static & Dynamic Game Theory: Foundations & Applications, 87–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51941-4_7.

10

Munshi, Shiladitya, Ayan Chakraborty, and Debajyoti Mukhopadhyay. "Constraint Driven Stratification of RDF with Hypergraph Graph (HG(2)) Data Structure". In Communications in Computer and Information Science, 167–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36321-4_15.


Conference papers on the topic "Hypergraph-structure"

1

Cai, Derun, Moxian Song, Chenxi Sun, Baofeng Zhang, Shenda Hong, and Hongyan Li. "Hypergraph Structure Learning for Hypergraph Neural Networks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/267.

Abstract:
Hypergraphs are natural and expressive modeling tools to encode high-order relationships among entities. Several variations of Hypergraph Neural Networks (HGNNs) are proposed to learn the node representations and complex relationships in the hypergraphs. Most current approaches assume that the input hypergraph structure accurately depicts the relations in the hypergraphs. However, the input hypergraph structure inevitably contains noise, task-irrelevant information, or false-negative connections. Treating the input hypergraph structure as ground-truth information unavoidably leads to sub-optimal performance. In this paper, we propose a Hypergraph Structure Learning (HSL) framework, which optimizes the hypergraph structure and the HGNNs simultaneously in an end-to-end way. HSL learns an informative and concise hypergraph structure that is optimized for downstream tasks. To efficiently learn the hypergraph structure, HSL adopts a two-stage sampling process: hyperedge sampling for pruning redundant hyperedges and incident node sampling for pruning irrelevant incident nodes and discovering potential implicit connections. The consistency between the optimized structure and the original structure is maintained by the intra-hyperedge contrastive learning module. The sampling processes are jointly optimized with HGNNs towards the objective of the downstream tasks. Experiments conducted on 7 datasets show that HSL outperforms the state-of-the-art baselines while adaptively sparsifying hypergraph structures.
2

Zhang, Zizhao, Haojie Lin, and Yue Gao. "Dynamic Hypergraph Structure Learning". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/439.

Abstract:
In recent years, hypergraph modeling has shown its superiority on correlation formulation among samples and has wide applications in classification, retrieval, and other tasks. In all these works, the performance of hypergraph learning highly depends on the generated hypergraph structure. A good hypergraph structure can represent the data correlation better, and vice versa. Although hypergraph learning has attracted much attention recently, most of existing works still rely on a static hypergraph structure, and little effort concentrates on optimizing the hypergraph structure during the learning process. To tackle this problem, we propose a dynamic hypergraph structure learning method in this paper. In this method, given the originally generated hypergraph structure, the objective of our work is to simultaneously optimize the label projection matrix (the common task in hypergraph learning) and the hypergraph structure itself. More specifically, in this formulation, the label projection matrix is related to the hypergraph structure, and the hypergraph structure is associated with the data correlation from both the label space and the feature space. Here, we alternatively learn the optimal label projection matrix and the hypergraph structure, leading to a dynamic hypergraph structure during the learning process. We have applied the proposed method in the tasks of 3D shape recognition and gesture recognition. Experimental results on 4 public datasets show better performance compared with the state-of-the-art methods. We note that the proposed method can be further applied in other tasks.
3

Chang, Hyung Jin, Tobias Fischer, Maxime Petit, Martina Zambelli, and Yiannis Demiris. "Kinematic Structure Correspondences via Hypergraph Matching". In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.457.

4

Jiang, Jianwen, Yuxuan Wei, Yifan Feng, Jingxuan Cao, and Yue Gao. "Dynamic Hypergraph Neural Networks". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/366.

Abstract:
In recent years, graph/hypergraph-based deep learning methods have attracted much attention from researchers. These deep learning methods take graph/hypergraph structure as prior knowledge in the model. However, hidden and important relations are not directly represented in the inherent structure. To tackle this issue, we propose a dynamic hypergraph neural networks framework (DHGNN), which is composed of the stacked layers of two modules: dynamic hypergraph construction (DHG) and hypergraph convolution (HGC). Considering that the initially constructed hypergraph is probably not a suitable representation for data, the DHG module dynamically updates hypergraph structure on each layer. Then hypergraph convolution is introduced to encode high-order data relations in a hypergraph structure. The HGC module includes two phases: vertex convolution and hyperedge convolution, which are designed to aggregate feature among vertices and hyperedges, respectively. We have evaluated our method on standard datasets, the Cora citation network and Microblog dataset. Our method outperforms state-of-the-art methods. More experiments are conducted to demonstrate the effectiveness and robustness of our method to diverse data distributions.
5

Zhou, Peng, Zongqian Wu, Xiangxiang Zeng, Guoqiu Wen, Junbo Ma, and Xiaofeng Zhu. "Totally Dynamic Hypergraph Neural Networks". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/275.

Abstract:
Recent dynamic hypergraph neural networks (DHGNNs) are designed to adaptively optimize the hypergraph structure to avoid the dependence on the initial hypergraph structure, thus capturing more hidden information for representation learning. However, most existing DHGNNs cannot adjust the hyperedge number and thus fail to fully explore the underlying hypergraph structure. This paper proposes a new method, namely, the totally dynamic hypergraph neural network (TDHNN), to adjust the hyperedge number for optimizing the hypergraph structure. Specifically, the proposed method first captures hyperedge feature distribution to obtain dynamical hyperedge features rather than fixed ones, by conducting the sampling from the learned distribution. The hypergraph is then constructed based on the attention coefficients of both sampled hyperedges and nodes. The node features are dynamically updated by designing a simple hypergraph convolution algorithm. Experimental results on real datasets demonstrate the effectiveness of the proposed method, compared to SOTA methods. The source code can be accessed via https://github.com/HHW-zhou/TDHNN.
6

Kok, Stanley, and Pedro Domingos. "Learning Markov logic network structure via hypergraph lifting". In the 26th Annual International Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553440.

7

Munshi, Shiladitya, Ayan Chakraborty, and Debajyoti Mukhopadhyay. "Theories of Hypergraph-Graph (HG(2)) Data Structure". In 2013 International Conference on Cloud & Ubiquitous Computing & Emerging Technologies (CUBE). IEEE, 2013. http://dx.doi.org/10.1109/cube.2013.45.

8

Li, Shengkun, Dawei Du, Longyin Wen, Ming-Ching Chang, and Siwei Lyu. "Hybrid structure hypergraph for online deformable object tracking". In 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. http://dx.doi.org/10.1109/icip.2017.8296457.

9

Zhao, Yusheng, Xiao Luo, Wei Ju, Chong Chen, Xian-Sheng Hua, and Ming Zhang. "Dynamic Hypergraph Structure Learning for Traffic Flow Forecasting". In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00178.

10

Su, Lifan, Yue Gao, Xibin Zhao, Hai Wan, Ming Gu, and Jiaguang Sun. "Vertex-Weighted Hypergraph Learning for Multi-View Object Classification". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/387.

Abstract:
3D object classification with multi-view representation has become very popular, thanks to the progress on computer techniques and graphic hardware, and attracted much research attention in recent years. Regarding this task, there are mainly two challenging issues, i.e., the complex correlation among multiple views and the possible imbalance data issue. In this work, we propose to employ the hypergraph structure to formulate the relationship among 3D objects, taking the advantage of hypergraph on high-order correlation modelling. However, traditional hypergraph learning method may suffer from the imbalance data issue. To this end, we propose a vertex-weighted hypergraph learning algorithm for multi-view 3D object classification, introducing an updated hypergraph structure. In our method, the correlation among different objects is formulated in a hypergraph structure and each object (vertex) is associated with a corresponding weight, weighting the importance of each sample in the learning process. The learning process is conducted on the vertex-weighted hypergraph and the estimated object relevance is employed for object classification. The proposed method has been evaluated on two public benchmarks, i.e., the NTU and the PSB datasets. Experimental results and comparison with the state-of-the-art methods and recent deep learning method demonstrate the effectiveness of our proposed method.