Academic literature on the topic 'High-dimensional sparse graph'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'High-dimensional sparse graph.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "High-dimensional sparse graph"

1

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas, and Shivaram Venkataraman. "Demo of marius." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource-intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
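To make the underlying task concrete, here is a minimal, hypothetical sketch of the kind of model such engines train: a DistMult-style link-prediction objective with uniform negative sampling in PyTorch. The entity counts, dimensions, and optimizer are placeholders; this is not Marius' API or its out-of-core pipeline.

```python
import torch
import torch.nn as nn

# Minimal DistMult-style link-prediction sketch (illustrative only;
# Marius' configuration-driven engine and memory hierarchy are not shown).
n_entities, n_relations, dim = 10_000, 50, 128
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)
opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=1e-3)

def score(h, r, t):
    # DistMult triple score; higher means the edge is more plausible.
    return (ent(h) * rel(r) * ent(t)).sum(-1)

def step(heads, rels, tails):
    # Corrupt tails uniformly at random to obtain negative samples.
    neg_tails = torch.randint(0, n_entities, tails.shape)
    pos, neg = score(heads, rels, tails), score(heads, rels, neg_tails)
    # Logistic loss: push positive scores up, negative scores down.
    loss = torch.nn.functional.softplus(-pos).mean() + \
           torch.nn.functional.softplus(neg).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```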
2

Liu, Jianyu, Guan Yu, and Yufeng Liu. "Graph-based sparse linear discriminant analysis for high-dimensional classification." Journal of Multivariate Analysis 171 (May 2019): 250–69. http://dx.doi.org/10.1016/j.jmva.2018.12.007.

3

Wang, Li-e, and Xianxian Li. "A Clustering-Based Bipartite Graph Privacy-Preserving Approach for Sharing High-Dimensional Data." International Journal of Software Engineering and Knowledge Engineering 24, no. 7 (September 2014): 1091–111. http://dx.doi.org/10.1142/s0218194014500363.

Abstract:
Driven by mutual benefits, there is a demand for transactional data sharing among organizations or parties for research or business analysis purposes. Providing privacy-preserving data sharing while maintaining data utility becomes an essential concern, because transactional data may contain sensitive personal information. Existing privacy-preserving methods, such as k-anonymity and l-diversity, cannot handle high-dimensional sparse data well, since they bring about much data distortion in the anonymization process. In this paper, we use bipartite graphs with node attributes to model high-dimensional sparse data, and then propose a privacy-preserving approach for sharing transactional data in a new vision, in which the bipartite graph is anonymized into a weighted bipartite graph by clustering node attributes. Our approach can maintain the privacy of associations between entities and resist certain attackers with knowledge of partial items. Experiments have been performed on real-life data sets to measure the information loss and the accuracy of answering aggregate queries. Experimental results show that the approach improves the balance between privacy protection and data utility.
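As a rough illustration of the modeling step described above, the following hedged sketch builds a record-item bipartite graph with networkx and collapses item nodes into cluster super-nodes to produce a weighted bipartite graph. The records and the cluster assignment are invented; the paper's actual attribute-clustering criterion and privacy analysis are not reproduced.

```python
import networkx as nx

# Illustrative sketch of the bipartite modeling step (the paper's actual
# clustering criterion and privacy guarantees are not reproduced here).
records = {"r1": {"a", "b"}, "r2": {"a", "c"}, "r3": {"b", "c", "d"}}
B = nx.Graph()
B.add_nodes_from(records, bipartite=0)                 # entity side
B.add_nodes_from({i for s in records.values() for i in s}, bipartite=1)
B.add_edges_from((r, i) for r, s in records.items() for i in s)

# Hypothetical clustering of item nodes; each cluster becomes one super-node.
clusters = {"a": "C1", "b": "C1", "c": "C2", "d": "C2"}
W = nx.Graph()
for r, items in records.items():
    for i in items:
        c = clusters[i]
        w = W.get_edge_data(r, c, {"weight": 0})["weight"]
        W.add_edge(r, c, weight=w + 1)   # anonymized weighted bipartite graph
```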
4

Saul, Lawrence K. "A tractable latent variable model for nonlinear dimensionality reduction." Proceedings of the National Academy of Sciences 117, no. 27 (June 22, 2020): 15403–8. http://dx.doi.org/10.1073/pnas.1916012117.

Abstract:
We propose a latent variable model to discover faithful low-dimensional representations of high-dimensional data. The model computes a low-dimensional embedding that aims to preserve neighborhood relationships encoded by a sparse graph. The model both leverages and extends current leading approaches to this problem. Like t-distributed Stochastic Neighborhood Embedding, the model can produce two- and three-dimensional embeddings for visualization, but it can also learn higher-dimensional embeddings for other uses. Like LargeVis and Uniform Manifold Approximation and Projection, the model produces embeddings by balancing two goals—pulling nearby examples closer together and pushing distant examples further apart. Unlike these approaches, however, the latent variables in our model provide additional structure that can be exploited for learning. We derive an Expectation–Maximization procedure with closed-form updates that monotonically improve the model’s likelihood: In this procedure, embeddings are iteratively adapted by solving sparse, diagonally dominant systems of linear equations that arise from a discrete graph Laplacian. For large problems, we also develop an approximate coarse-graining procedure that avoids the need for negative sampling of nonadjacent nodes in the graph. We demonstrate the model’s effectiveness on datasets of images and text.
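The abstract's key computational claim is that each update reduces to solving a sparse, diagonally dominant linear system built from a graph Laplacian. The sketch below, with an arbitrary random graph and a made-up regularization constant, shows why such systems are cheap: conjugate gradients converges quickly on them. It is illustrative, not the paper's exact M-step.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sketch: solve a sparse, diagonally dominant SPD system (L + c*I) x = b,
# the shape of update the abstract describes (coefficients are made up).
n = 1000
rng = np.random.default_rng(0)
rows = rng.integers(0, n, 5 * n); cols = rng.integers(0, n, 5 * n)
A = sp.coo_matrix((np.ones(5 * n), (rows, cols)), shape=(n, n))
A = ((A + A.T) > 0).astype(float)                 # symmetric adjacency
L = sp.diags(np.asarray(A.sum(1)).ravel()) - A    # combinatorial Laplacian
M = (L + 0.1 * sp.eye(n)).tocsr()                 # diagonally dominant, SPD
b = rng.standard_normal(n)
x, info = cg(M, b, atol=1e-8)                     # converges fast here
assert info == 0
```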
5

Li, Xinyu, Xiaoguang Gao, and Chenfeng Wang. "A Novel BN Learning Algorithm Based on Block Learning Strategy." Sensors 20, no. 21 (November 7, 2020): 6357. http://dx.doi.org/10.3390/s20216357.

Abstract:
Learning accurate Bayesian Network (BN) structures for high-dimensional and sparse data is difficult because of high computational complexity. To learn accurate structures for such data faster, this paper adopts a divide-and-conquer strategy and proposes a block learning algorithm with a mutual-information-based K-means algorithm (the BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in the BN and a max-min parents and children (MMPC) algorithm to obtain the whole skeleton of the BN and find possible graph structures based on the separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to get candidate BNs, and the best BN is found by a scoring function. Experiments show that, for high-dimensional and sparse data, the BLMKM algorithm achieves the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to existing block learning algorithms, the BLMKM algorithm has a time advantage while ensuring accuracy. Analysis of a real radar effect mechanism dataset shows that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is efficient for BN learning and has practical application value.
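A hedged sketch of the blocking idea: estimate pairwise mutual information between discrete variables, then group variables by K-means on their MI profiles. Treating MI rows as K-means features is an assumption made for illustration (the paper uses an improved K-means variant), and the MMPC and pruned dynamic-programming stages are omitted.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import KMeans

# Sketch of the "blocking" stage: group variables by mutual information,
# then learn each block separately (MMPC and the DP scoring step omitted).
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(500, 20))       # 500 samples, 20 discrete vars

p = X.shape[1]
MI = np.zeros((p, p))
for i in range(p):
    for j in range(i + 1, p):
        MI[i, j] = MI[j, i] = mutual_info_score(X[:, i], X[:, j])

# Use each variable's MI profile as its feature vector and block via K-means
# (an illustrative assumption, not the paper's exact construction).
blocks = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(MI)
print({b: np.flatnonzero(blocks == b).tolist() for b in range(4)})
```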
6

Li, Ying, Xiaojun Xu, and Jianbo Li. "High-Dimensional Sparse Graph Estimation by Integrating DTW-D Into Bayesian Gaussian Graphical Models." IEEE Access 6 (2018): 34279–87. http://dx.doi.org/10.1109/access.2018.2849213.

7

Dobson, Andrew, and Kostas Bekris. "Improved Heuristic Search for Sparse Motion Planning Data Structures." Proceedings of the International Symposium on Combinatorial Search 5, no. 1 (September 1, 2021): 196–97. http://dx.doi.org/10.1609/socs.v5i1.18334.

Abstract:
Sampling-based methods provide efficient, flexible solutions for motion planning, even for complex, high-dimensional systems. Asymptotically optimal planners ensure convergence to the optimal solution, but produce dense structures. This work shows how to extend sparse methods that achieve asymptotic near-optimality by using multiple-goal heuristic search during graph construction. The resulting method produces identical output to the existing Incremental Roadmap Spanner approach but in an order of magnitude less time.
8

Kefato, Zekarias, and Sarunas Girdzijauskas. "Gossip and Attend: Context-Sensitive Graph Representation Learning." Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 351–59. http://dx.doi.org/10.1609/icwsm.v14i1.7305.

Abstract:
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representations of high-dimensional and often sparse graphs. Most studies explore the structure and metadata associated with the graph using random walks and employ unsupervised or semi-supervised learning schemes. Learning in these methods is context-free, resulting in only a single representation per node. Recently, studies have questioned the adequacy of a single representation and proposed context-sensitive approaches, which are capable of extracting multiple node representations for different contexts. This has proved highly effective in applications such as link prediction and ranking. However, most of these methods rely on additional textual features that require complex and expensive RNNs or CNNs to capture high-level features, or on a community detection algorithm to identify multiple contexts of a node. In this study we show that to extract high-quality context-sensitive node representations it is not necessary to rely on supplementary node features, nor to employ computationally heavy and complex models. We propose Goat, a context-sensitive algorithm inspired by gossip communication and a mutual attention mechanism that operates simply over the structure of the graph. We show the efficacy of Goat using 6 real-world datasets on link prediction and node clustering tasks and compare it against 12 popular and state-of-the-art (SOTA) baselines. Goat consistently outperforms them and achieves up to 12% and 19% gain over the best performing methods on link prediction and clustering tasks, respectively.
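To show what mutual attention over the graph structure can look like, here is a small, hypothetical PyTorch sketch: two nodes attend over each other's neighborhood embeddings, yielding pair-dependent (context-sensitive) representations. Dimensions and the affinity/pooling choices are placeholders, not Goat's exact formulation.

```python
import torch

# Sketch of mutual attention over two nodes' neighborhood embeddings
# (dimensions and pooling are placeholders, not Goat's published model).
d, n_u, n_v = 64, 12, 9
U = torch.randn(n_u, d)     # embeddings of node u's sampled neighbors
V = torch.randn(n_v, d)     # embeddings of node v's sampled neighbors

A = torch.tanh(U @ V.T)                              # pairwise affinities
att_u = torch.softmax(A.max(dim=1).values, dim=0)    # u-side attention
att_v = torch.softmax(A.max(dim=0).values, dim=0)    # v-side attention

# Context-sensitive representations: each node is summarized differently
# depending on which other node it is being paired with.
z_u = att_u @ U
z_v = att_v @ V
score = torch.dot(z_u, z_v)                          # link-prediction score
```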
9

Li, Pei Heng, Taeho Lee, and Hee Yong Youn. "Dimensionality Reduction with Sparse Locality for Principal Component Analysis." Mathematical Problems in Engineering 2020 (May 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/9723279.

Abstract:
Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into low-dimensional representations. The existing schemes usually preserve either only the global structure or only the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively consider the trade-off between complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph that is more robust against noise. In addition, simultaneous learning of the projection matrix and the sparse similarity matrix is possible. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.
10

Chen, Dongming, Mingshuo Nie, Hupo Zhang, Zhen Wang, and Dongqi Wang. "Network Embedding Algorithm Taking in Variational Graph AutoEncoder." Mathematics 10, no. 3 (February 2, 2022): 485. http://dx.doi.org/10.3390/math10030485.

Abstract:
Complex networks with node attribute information are employed to represent complex relationships between objects. Research on attributed network embedding fuses the topology and the node attribute information of the attributed network in a common latent representation space, encoding the high-dimensional sparse network information into a low-dimensional dense vector representation and effectively improving the performance of network analysis tasks. Current research on attributed network embedding faces the problems of a high-dimensional, sparse attribute eigenmatrix and underutilization of attribute information. In this paper, we propose a network embedding algorithm taking in a variational graph autoencoder (NEAT-VGA). This algorithm first pre-processes the attribute features, i.e., performs attribute feature learning for the network nodes. Then, the feature learning matrix and the adjacency matrix of the network are fed into the variational graph autoencoder to obtain the Gaussian distribution of the latent vectors, from which high-quality node embedding vectors are more easily generated. The node embeddings sampled from this Gaussian distribution are then used for reconstruction, with structural and attribute losses. The loss function is minimized by iterative training until a low-dimensional vector representation, containing both the network structure information and the attribute information of nodes, is obtained, and the performance of the algorithm is evaluated by link prediction experiments.
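For orientation, here is a minimal variational graph autoencoder in PyTorch, following the standard VGAE parameterization (two-layer GCN encoder, inner-product decoder). NEAT-VGA's attribute-feature pre-learning and its combined structural/attribute loss are not reproduced.

```python
import torch
import torch.nn as nn

# Minimal variational graph autoencoder sketch (standard VGAE; the paper's
# attribute pre-learning step and combined loss are not reproduced).
class VGAE(nn.Module):
    def __init__(self, in_dim, hid, z_dim):
        super().__init__()
        self.W0 = nn.Linear(in_dim, hid, bias=False)
        self.W_mu = nn.Linear(hid, z_dim, bias=False)
        self.W_logvar = nn.Linear(hid, z_dim, bias=False)

    def encode(self, A_hat, X):
        # A_hat: normalized adjacency with self-loops (dense for brevity).
        H = torch.relu(A_hat @ self.W0(X))
        return A_hat @ self.W_mu(H), A_hat @ self.W_logvar(H)

    def forward(self, A_hat, X):
        mu, logvar = self.encode(A_hat, X)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        A_rec = torch.sigmoid(z @ z.T)        # inner-product decoder
        return A_rec, mu, logvar
```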

Dissertations / Theses on the topic "High-dimensional sparse graph"

1

Artaria, Andrea. "Objective Bayesian Analysis for Differential Gaussian Directed Acyclic Graphs." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/55327.

Abstract:
Often we are confronted with heterogeneous multivariate data, i.e., data coming from several categories, and the interest may center on the differential structure of stochastic dependence among the variables between the groups. The focus in this work is on the two-group problem, which is addressed by modeling the system as a pair of Gaussian directed acyclic graphs (DAGs), linked so that they are estimated jointly in order to exploit, whenever they exist, similarities between the graphs. The model can be viewed as a set of separate regressions, and the proposal consists in assigning a non-local prior to the regression coefficients with the objective of enforcing stronger sparsity constraints on model selection. Model selection is based on the Moment Fractional Bayes Factor and is performed through a stochastic search algorithm over the space of DAG models.
2

Xu, Ning. "Accurate variable selection and causal structure recovery in high-dimensional data." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/22920.

Abstract:
From the perspective of econometrics, an accurate variable selection method greatly enhances the reliability of causal analysis and interpretation of the estimators, especially in a world of ever-expanding data dimensions. While variable selection methods in machine learning and statistics have been developed rapidly and applied widely in different branches of data science in the last decade, they have been more slowly adopted in econometrics. Nevertheless, the machine learning methods, including lasso, forward regression, cross-validation and marginal correlation ranking (also called variable screening), are subject to a range of issues that may result in errors in variable selection and inaccurate causal interpretation. I propose two new variable-selection methods that significantly mitigate the issues with existing techniques and that provide accurate variable selection and reliable causal structure estimation in high-dimensional data. In Chapter 1, I develop bounds for cross-validation errors that may be used as a criterion for variable selection with many existing learning algorithms (including lasso, forward regression and variable screening), yielding a sparse and stable model that retains all of the relevant variables. In Chapter 2, I develop an entirely new learning algorithm for variable selection—subsample-ordered least-angle regression (solar)—and show in simulations that solar outperforms coordinate descent and lars-lasso in terms of the sparsity, stability, accuracy, and robustness of variable selection. In Chapter 3, I demonstrate the superior variable-selection performance of solar using real-world data from two completely different samples: prostate cancer patients and house prices. I also show that combining solar variable selection with linear probabilistic graph learning yields a plausible, data-driven method to recover causal structure in data.
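In the spirit of Chapter 2's subsample-based approach, here is a generic, hedged sketch of subsample-aggregated variable selection: rank variables by how often lasso selects them across subsamples and keep only the stable ones. It uses scikit-learn's LassoCV rather than the thesis's least-angle-regression ordering, so it illustrates the idea, not solar itself.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Generic sketch of subsample-aggregated variable selection (illustrative;
# solar itself orders variables via least-angle regression on subsamples).
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * rng.standard_normal(n)

freq = np.zeros(p)
n_sub = 20
for _ in range(n_sub):
    idx = rng.choice(n, size=int(0.7 * n), replace=False)
    coef = LassoCV(cv=5).fit(X[idx], y[idx]).coef_
    freq += (np.abs(coef) > 1e-8)            # count selections per variable

selected = np.flatnonzero(freq / n_sub >= 0.9)   # keep stable variables only
print(selected)                                  # typically [0, 1]
```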
3

Jalali, Ali. "Dirty statistical models." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5088.

Abstract:
In fields across science and engineering, we are increasingly faced with problems where the number of variables or features we need to estimate is much larger than the number of observations. Under such high-dimensional scaling, for any hope of statistically consistent estimation, it becomes vital to leverage any potential structure in the problem such as sparsity, low-rank structure or block sparsity. However, data may deviate significantly from any one such statistical model. The motivation of this thesis is: can we simultaneously leverage more than one such structural model, to obtain consistency in a larger number of problems, and with fewer samples, than can be obtained by single models? Our approach involves combining structures via simple linear superposition, a technique we term dirty models. The idea is very simple: while any one structure might not capture the data, a superposition of structural classes might. A dirty model thus searches for a parameter that can be decomposed into a number of simpler structures such as (a) sparse plus block-sparse, (b) sparse plus low-rank and (c) low-rank plus block-sparse. In this thesis, we propose dirty-model-based algorithms for different problems such as multi-task learning, graph clustering and time-series analysis with latent factors. We analyze these algorithms in terms of the number of observations needed to estimate the variables. These algorithms are based on convex optimization and are sometimes relatively slow. We provide a class of low-complexity greedy algorithms that not only solve these optimizations faster, but also guarantee the solution. Beyond the theoretical results, in each case we provide experimental results to illustrate the power of dirty models.
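A toy sketch of the "sparse plus low-rank" superposition: alternate singular-value thresholding for the low-rank part with elementwise soft-thresholding for the sparse part (an RPCA-style proximal scheme). The penalties and iteration count are arbitrary, and this stands in for the thesis's algorithms rather than reproducing them.

```python
import numpy as np

# Sketch: decompose M into L (low-rank) + S (sparse) by alternating proximal
# steps: singular-value thresholding for L, soft-thresholding for S.
# (Illustrative of the "dirty model" superposition, not the thesis's solvers.)
def sparse_plus_lowrank(M, lam=0.1, mu=1.0, iters=100):
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(iters):
        # Singular-value thresholding updates the low-rank part.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0)) @ Vt
        # Elementwise soft-thresholding updates the sparse part.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)
    return L, S

rng = np.random.default_rng(0)
truth_L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
truth_S = (rng.random((50, 50)) < 0.05) * 10 * rng.standard_normal((50, 50))
L, S = sparse_plus_lowrank(truth_L + truth_S)
```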

Book chapters on the topic "High-dimensional sparse graph"

1

Skillicorn, David B. "Representation by Graphs." In Understanding High-Dimensional Spaces, 67–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33398-9_6.

2

O’Mahony, Niall, Anshul Awasthi, Joseph Walsh, and Daniel Riordan. "Latent Space Cartography for Geometrically Enriched Latent Spaces." In Communications in Computer and Information Science, 488–501. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_38.

Abstract:
There have been many developments in recent years on the exploitation of non-Euclidean geometry for the better representation of the relation between subgroups in datasets. Great progress has been made in this field of Disentangled Representation Learning, in leveraging information geometry divergence, manifold regularisation and geodesics to allow complex dynamics to be captured in the latent space of the representations produced. However, interpreting the high-dimensional latent spaces of the modern deep learning-based models involved is non-trivial. Therefore, in this paper, we investigate how techniques in Latent Space Cartography can be used to display abstract and representational 2D visualisations of manifolds. Additionally, we present a multi-task metric learning model to capture in its output representations as many metrics as are available in a multi-faceted fine-grained change detection dataset. We also implement an interactive visualisation tool that utilises cartographic techniques that allow dimensions and annotations of graphs to be representative of the underlying factors affecting individual scenarios, which the user can morph and transform to focus on an individual/sub-group to see how they are performing with respect to said metrics.
3

Mateus, Diana, Christian Wachinger, Selen Atasoy, Loren Schwarz, and Nassir Navab. "Learning Manifolds." In Machine Learning in Computer-Aided Diagnosis, 374–402. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0059-1.ch018.

Abstract:
Computer aided diagnosis is often confronted with processing and analyzing high dimensional data. One alternative to deal with such data is dimensionality reduction. This chapter focuses on manifold learning methods to create low dimensional data representations adapted to a given application. From pairwise non-linear relations between neighboring data-points, manifold learning algorithms first approximate the low dimensional manifold where data lives with a graph; then, they find a non-linear map to embed this graph into a low dimensional space. Since the explicit pairwise relations and the neighborhood system can be designed according to the application, manifold learning methods are very flexible and allow easy incorporation of domain knowledge. The authors describe different assumptions and design elements that are crucial to building successful low dimensional data representations with manifold learning for a variety of applications. In particular, they discuss examples for visualization, clustering, classification, registration, and human-motion modeling.
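The two-stage pipeline the chapter describes (approximate the manifold with a neighborhood graph, then embed that graph nonlinearly) fits in a few lines of scikit-learn; the neighbor count and example dataset below are placeholders.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding

# The two-stage manifold learning pipeline: approximate the manifold with a
# k-NN graph, then embed that graph into a low-dimensional space.
X, _ = make_swiss_roll(n_samples=1500, random_state=0)
knn = kneighbors_graph(X, n_neighbors=10, mode="connectivity")  # sparse graph
emb = SpectralEmbedding(n_components=2, affinity="precomputed")
Y = emb.fit_transform(0.5 * (knn + knn.T).toarray())  # symmetrize, then embed
```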

Conference papers on the topic "High-dimensional sparse graph"

1

Wu, Di, Gang Lu, and Zhicheng Xu. "Robust and Accurate Representation Learning for High-dimensional and Sparse Matrices in Recommender Systems." In 2020 IEEE International Conference on Knowledge Graph (ICKG). IEEE, 2020. http://dx.doi.org/10.1109/icbk50248.2020.00075.

2

Zhang, Jiaqi, Meng Wang, Qinchi Li, Sen Wang, Xiaojun Chang, and Beilun Wang. "Quadratic Sparse Gaussian Graphical Model Estimation Method for Massive Variables." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/410.

Abstract:
We consider the problem of estimating a sparse Gaussian Graphical Model with a special graph topological structure and more than a million variables. Most previous scalable estimators still contain expensive calculation steps (e.g., matrix inversion or Hessian matrix calculation) and become infeasible in high-dimensional scenarios, where p (the number of variables) is larger than n (the number of samples). To overcome this challenge, we propose a novel method, called Fast and Scalable Inverse Covariance Estimator by Thresholding (FST). FST first obtains a graph structure by applying a generalized threshold to the sample covariance matrix. Then, it solves multiple block-wise subproblems via element-wise thresholding. By using matrix thresholding instead of matrix inversion as the computational bottleneck, FST reduces its computational complexity to a much lower order of magnitude (O(p²)). We show that FST obtains the same sharp convergence rate O(√(log max{p, n}/n)) as other state-of-the-art methods. We validate the method empirically, on multiple simulated datasets and one real-world dataset, and show that FST is two times faster than the four baselines while achieving a lower error rate under both the Frobenius norm and the max norm.
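The core step is easy to state in code: soft-threshold the off-diagonal of the sample covariance to expose a sparse graph. The numpy sketch below uses a threshold at the √(log p / n) rate mentioned in the abstract; the block-wise refinement and the paper's tuned generalized threshold are omitted.

```python
import numpy as np

# Core thresholding idea: sparsify the sample covariance to obtain a graph
# structure (the block-wise refinement is omitted, and the threshold level
# is a placeholder at the rate the abstract mentions, not the tuned choice).
def soft_threshold_cov(X, tau):
    S = np.cov(X, rowvar=False)
    T = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(T, np.diag(S))          # keep variances un-shrunk
    return T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))          # n = 200 < p = 500
T = soft_threshold_cov(X, tau=np.sqrt(np.log(500) / 200))
support = np.abs(T) > 0                      # estimated graph edges
```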
3

Ilinca, Florin, Jean-François Hétu, Martin Audet, and Randall Bramley. "Simulation of 3-D Mold-Filling and Solidification Processes on Distributed Memory Parallel Architectures." In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-0805.

Abstract:
This work presents industrial mold-filling applications of a three-dimensional stabilized finite element solver on distributed memory parallel architectures. The paper focuses on the solution algorithm and parallel implementation for complex multiphysics problems involving high Reynolds number flows with free surfaces, turbulence modeling and heat transfer. Standard domain decomposition methods (Chaco, Metis) are applied to the graph of nodes obtained from the finite element mesh, and a distributed-memory MPI programming model is used. An implicit time integration scheme and a segregated iterative algorithm are used to solve the momentum, energy, turbulence-variable and front-tracking equations. The equations are discretized using a stabilized SUPG finite element method on linear elements. The resulting sparse system of linear equations is solved using parallel preconditioned iterative methods.
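As a serial, single-machine stand-in for the last step described (parallel preconditioned iterative solution of the assembled sparse system), the following SciPy sketch solves a small sparse system with ILU-preconditioned GMRES. The FEM assembly, SUPG stabilization, and MPI distribution are not shown, and the matrix is a toy.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Serial illustration of the final solve step: a preconditioned iterative
# method on a sparse system (FEM assembly and MPI distribution not shown).
n = 2000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)                # incomplete-LU preconditioner
M = LinearOperator((n, n), matvec=ilu.solve)
x, info = gmres(A, b, M=M, atol=1e-10)
assert info == 0                             # 0 means converged
```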
4

Lee, Yong Hoon, R. E. Corman, Randy H. Ewoldt, and James T. Allison. "A Multiobjective Adaptive Surrogate Modeling-Based Optimization (MO-ASMO) Framework Using Efficient Sampling Strategies." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67541.

Abstract:
A novel multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) framework is proposed to utilize a minimal number of training samples efficiently for sequential model updates. All sample points are enforced to be feasible and to provide coverage of sparsely explored design regions using a new optimization subproblem. The MO-ASMO method only evaluates high-fidelity functions at feasible sample points. During an exploitation sampling phase, samples are selected to enhance solution accuracy rather than global exploration. Sampling tasks are especially challenging for multiobjective optimization; for an n-dimensional design space, a strategy is required for generating model update sample points near an (n − 1)-dimensional hypersurface corresponding to the Pareto set in the design space. This is addressed here using a force-directed layout algorithm, adapted from graph visualization strategies, to distribute feasible sample points evenly near the estimated Pareto set. Model validation samples are chosen uniformly on the Pareto set hypersurface, and surrogate model estimates at these points are compared to high-fidelity model responses. All high-fidelity model evaluations are stored for later use to train an updated surrogate model. The MO-ASMO algorithm, along with the set of new sampling strategies, is tested using two mathematical and one realistic engineering problem. The second mathematical test problem is specifically designed to test the limits of this algorithm in coping with very narrow, non-convex feasible domains. It involves oscillatory objective functions, giving rise to a discontinuous set of Pareto-optimal solutions. The third test problem demonstrates that the MO-ASMO algorithm can handle a practical engineering problem with more than 10 design variables and black-box simulations. The efficiency of the MO-ASMO algorithm is demonstrated by comparing the results of the two mathematical problems to the results of the NSGA-II algorithm in terms of the number of high-fidelity function evaluations; MO-ASMO is shown to reduce total function evaluations by several orders of magnitude when converging to the same Pareto sets.
5

Morris, Clinton, and Carolyn C. Seepersad. "Identification of High Performance Regions of High-Dimensional Design Spaces With Materials Design Applications." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67769.

Abstract:
Design exploration methods seek to identify sets of candidate designs or regions of the design space that yield desirable performance. Commonly, the dimensionality of the design space exceeds the limited dimensions supported by standard graphical techniques, making it difficult for human designers to visualize or understand the underlying structure of the design space. With standard visualization tools, it is sometimes challenging to visualize a multi-dimensional Pareto frontier, but it is even more difficult to visualize the collections of design (input) variable values that yield those Pareto solutions. It is difficult for a designer to determine not only how many distinct regions of the design (input) space may offer desirable performance but also how those design spaces are structured. In this paper, a form of spectral clustering known as ε-neighborhood clustering is proposed for identifying satisfactory regions in the design spaces of multilevel problems. By exploiting properties of graph theory, the number of satisfactory design regions can be determined accurately and efficiently, and the design space can be partitioned. The method is demonstrated to be effective at identifying clusters in a 10 dimensional space. It is also applied to a multilevel materials design problem to demonstrate its efficacy on a realistic design application. Future work intends to visualize each individually identified design region to produce an intuitive mapping of the design space.
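The graph-theoretic core is compact: connect satisfactory designs closer than ε and count connected components to get the number of satisfactory regions. The SciPy sketch below does this for two synthetic clusters in a 10-dimensional space; the value of ε is a placeholder, whereas the paper derives it from properties of the point set.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Graph-theoretic core of epsilon-neighborhood clustering: link satisfactory
# designs closer than eps, then count connected components (eps is a
# placeholder; the paper selects it from the point set itself).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (40, 10)),      # two satisfactory
                 rng.normal(1, 0.1, (40, 10))])     # regions in 10-D
eps = 0.8
A = csr_matrix(squareform(pdist(pts)) < eps)        # eps-neighborhood graph
n_regions, labels = connected_components(A, directed=False)
print(n_regions)                                    # -> 2
```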
6

Hu, Binbin, Zhengwei Wu, Jun Zhou, Ziqi Liu, Zhigang Huangfu, Zhiqiang Zhang, and Chaochao Chen. "MERIT: Learning Multi-level Representations on Temporal Graphs." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/288.

Abstract:
Recently, representation learning on temporal graphs has drawn increasing attention; it aims at learning temporal patterns to characterize the evolving nature of dynamic graphs in real-world applications. Despite their effectiveness, these methods commonly ignore the individual- and combinatorial-level patterns derived from different types of interactions (e.g., user-item), which are at the heart of representation learning on temporal graphs. To fill this gap, we propose MERIT, a novel multi-level graph attention network for inductive representation learning on temporal graphs. We adaptively embed the original timestamps into a higher, continuous dimensional space for learning individual-level periodicity through a Personalized Time Encoding (PTE) module. Furthermore, we equip MERIT with a Continuous time and Context aware Attention (Coco-Attention) mechanism which chronologically locates the most relevant neighbors by jointly capturing multi-level context on temporal graphs. Finally, MERIT performs multiple aggregations and propagations to explore and exploit high-order structural information for downstream tasks. Extensive experiments on four public datasets demonstrate the effectiveness of MERIT on both (inductive/transductive) link prediction and node classification tasks.
7

Zhu, Xiaofeng, Cong Lei, Hao Yu, Yonggang Li, Jiangzhang Gan, and Shichao Zhang. "Robust Graph Dimensionality Reduction." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/452.

Abstract:
In this paper, we propose conducting Robust Graph Dimensionality Reduction (RGDR) by learning a transformation matrix to map original high-dimensional data into their low-dimensional intrinsic space without the influence of outliers. To do this, we propose 1) adaptively learning three variables simultaneously, i.e., a reverse graph embedding of the original data, a transformation matrix, and a graph matrix preserving the local similarity of the original data in their low-dimensional intrinsic space; and 2) employing robust estimators to avoid outliers in the processes of optimizing these three matrices. As a result, the original data are cleaned by two strategies, i.e., a prediction of the original data based on the three resulting variables and robust estimators, so that the transformation matrix can be learnt from an accurately estimated intrinsic space with the help of the reverse graph embedding and the graph matrix. Moreover, we propose a new optimization algorithm for the resulting objective function and theoretically prove its convergence. Experimental results indicate that our proposed method outperforms all the comparison methods in terms of different classification tasks.
8

Wiest, Tyler, Carolyn Conner Seepersad, and Michael Haberman. "Efficient Design of Acoustic Metamaterials With Design Domains of Variable Size Using Graph Neural Networks." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-89722.

Abstract:
Most metamaterial systems are designed with periodic unit cells to make the underlying design problem more tractable. Shifting to nonperiodic unit cells enables a broader range of physical properties at the expense of higher dimensional design spaces of variable size associated with the adjustable quantity and size of physical features. Representing the physical behavior of these systems with metamodels can enhance the efficiency of the design process, but several challenges must be overcome. Training metamodels for high-dimensional systems requires large volumes of data, and in nonperiodic systems where the quantity and arrangement of structural features is variable, the metamodels must be valid for systems with a broad range of dimensionalities. Furthermore, in acoustic and dynamic applications, responses that are sensitive with respect to frequency compound the issue by requiring dense sampling throughout the spectrum. This paper presents a method to address these challenges by representing these systems as graphs and training purpose-built neural network architectures to update the graphs from state to state. Encoding graph states before — and decoding after — calling the state update functions enables the update functions to maintain generality with respect to dimensionality. The trained update functions are then applicable to systems with dimensionalities beyond those frequently observed in training. By skewing training samples toward lower-dimensional systems that are less computationally expensive to simulate, the computational expense of gathering training data can be reduced with minimal loss of accuracy in predicting the dynamic behavior of higher-dimensional systems. The method is demonstrated by designing an asymmetric acoustic absorber.
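The dimension-agnostic property comes from applying the same learned edge and node update functions at every edge and node, so the parameter count is independent of system size. A toy PyTorch message-passing layer makes this visible: the same weights process a 5-node and a 50-node graph. This is a generic sketch, not the paper's encode-process-decode architecture.

```python
import torch
import torch.nn as nn

# Why a graph network generalizes across system sizes: the same learned
# edge/node updates apply to every edge and node, so parameters do not
# depend on how many unit cells the design has (generic toy sketch).
class GNLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())

    def forward(self, h, edges):
        src, dst = edges                          # shape: (num_edges,) each
        m = self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming msgs
        return self.node_mlp(torch.cat([h, agg], dim=-1))

layer = GNLayer(d=16)
h5 = layer(torch.randn(5, 16),
           (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))
h50 = layer(torch.randn(50, 16),
            (torch.randint(0, 50, (120,)), torch.randint(0, 50, (120,))))
```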
9

Ramesh, Rahul, Manan Tomar, and Balaraman Ravindran. "Successor Options: An Option Discovery Framework for Reinforcement Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/458.

Abstract:
The options framework in reinforcement learning models the notion of a skill or a temporally extended sequence of actions. The discovery of a reusable set of skills has typically entailed building options, that navigate to bottleneck states. In this work, we instead adopt a complementary approach, where we attempt to discover options that navigate to landmark states. These states are prototypical representatives of well-connected regions and can hence access the associated region with relative ease. In this work, we propose Successor Options, which leverages Successor representations to build a model of the state space. The intra-option policies are learnt using a novel pseudo-reward and the model scales to high-dimensional spaces since it does not construct an explicit graph of the entire state space. Additionally, we also propose an Incremental Successor Options model that iterates between constructing Successor representations and building options, which is useful when robust Successor representations cannot be built solely from primitive actions. We demonstrate the efficacy of our approach on a collection of grid-worlds, and on the high-dimensional robotic control environment of Fetch.
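For a tabular MDP the successor representation that Successor Options builds on has a closed form, Ψ = (I − γP)⁻¹, and landmark candidates can be found by clustering its rows. The sketch below does this for a random walk on a chain; using K-means on SR rows is a schematic stand-in for the paper's option-discovery procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Tabular successor representation (SR): Psi = (I - gamma * P)^(-1), the
# expected discounted state occupancies that Successor Options builds on.
n, gamma = 20, 0.95
P = np.zeros((n, n))
for s in range(n):                          # random walk on a chain
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

Psi = np.linalg.inv(np.eye(n) - gamma * P)  # successor representation
# Cluster SR rows; one prototypical "landmark" state per well-connected
# region (schematic stand-in for the paper's option-discovery step).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Psi)
```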
10

Wang, Qixiang, Shanfeng Wang, Maoguo Gong, and Yue Wu. "Feature Hashing for Network Representation Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/390.

Abstract:
The goal of network representation learning is to embed nodes so as to encode the proximity structures of a graph into a continuous low-dimensional feature space. In this paper, we propose a novel algorithm called node2hash based on feature hashing for generating node embeddings. This approach follows the encoder-decoder framework. There are two main mapping functions in this framework. The first is an encoder to map each node into high-dimensional vectors. The second is a decoder to hash these vectors into a lower dimensional feature space. More specifically, we firstly derive a proximity measurement called expected distance as target which combines position distribution and co-occurrence statistics of nodes over random walks so as to build a proximity matrix, then introduce a set of T different hash functions into feature hashing to generate uniformly distributed vector representations of nodes from the proximity matrix. Compared with the existing state-of-the-art network representation learning approaches, node2hash shows a competitive performance on multi-class node classification and link prediction tasks on three real-world networks from various domains.
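The hashing trick that node2hash builds on can be shown generically: hash-chosen indices and signs project a sparse high-dimensional proximity row into d buckets. The sketch below is standard signed feature hashing, with an invented input row; node2hash's T hash functions and expected-distance proximity matrix follow the paper.

```python
import numpy as np

# The hashing trick node2hash builds on: project a high-dimensional sparse
# proximity vector into d buckets with hash-chosen indices and signs.
# (Generic sketch; node2hash's T hash functions and expected-distance
# proximity matrix are described in the paper. Python's str hashing is
# per-process salted; a stable hash, e.g. hashlib, would be used in practice.)
def hash_embed(features, d=64, seed=0):
    # features: dict mapping neighbor-id -> proximity weight (sparse row)
    out = np.zeros(d)
    for key, value in features.items():
        h = hash((seed, key))
        out[h % d] += (1 if (h >> 1) % 2 == 0 else -1) * value  # signed bucket
    return out

row = {"node_17": 0.8, "node_3": 0.5, "node_92": 0.1}   # hypothetical weights
vec = hash_embed(row)
```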