
Journal articles on the topic "High-dimensional sparse graph"



Consult the top 50 journal articles for your research on the topic "High-dimensional sparse graph".


You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas, and Shivaram Venkataraman. "Demo of Marius." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
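To make the core computation concrete, the following is a minimal sketch of the kind of training loop a graph embedding engine scales up: a dot-product score with logistic loss and one negative sample per observed edge. All names and sizes are illustrative assumptions; this is not Marius' actual API or its data replacement policy.

```python
# Minimal sketch (assumed names; not Marius' API): learn node embeddings for
# link prediction with a dot-product score, logistic loss, and one negative
# sample per observed edge.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim, lr = 100, 16, 0.05
emb = rng.normal(scale=0.1, size=(num_nodes, dim))  # dense low-dimensional vectors

edges = [(rng.integers(num_nodes), rng.integers(num_nodes)) for _ in range(500)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(5):
    for src, dst in edges:
        neg = rng.integers(num_nodes)               # random negative sample
        for v, label in ((dst, 1.0), (neg, 0.0)):
            g = sigmoid(emb[src] @ emb[v]) - label  # logistic-loss gradient
            emb[src], emb[v] = emb[src] - lr * g * emb[v], emb[v] - lr * g * emb[src]
```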
2

Liu, Jianyu, Guan Yu, and Yufeng Liu. "Graph-based sparse linear discriminant analysis for high-dimensional classification." Journal of Multivariate Analysis 171 (May 2019): 250–69. http://dx.doi.org/10.1016/j.jmva.2018.12.007.

3

Wang, Li-e, and Xianxian Li. "A Clustering-Based Bipartite Graph Privacy-Preserving Approach for Sharing High-Dimensional Data." International Journal of Software Engineering and Knowledge Engineering 24, no. 07 (September 2014): 1091–111. http://dx.doi.org/10.1142/s0218194014500363.

Abstract:
Driven by mutual benefits, there is a demand for transactional data sharing among organizations or parties for research or business analysis purposes. Because transactional data may contain sensitive personal information, it is an essential concern to provide privacy-preserving data sharing while maintaining data utility. Existing privacy-preserving methods, such as k-anonymity and l-diversity, cannot handle high-dimensional sparse data well, since they bring about much data distortion in the anonymization process. In this paper, we use bipartite graphs with node attributes to model high-dimensional sparse data, and then propose a privacy-preserving approach for sharing transactional data in a new vision, in which the bipartite graph is anonymized into a weighted bipartite graph by clustering node attributes. Our approach can maintain the privacy of the associations between entities and resist certain attackers with knowledge of partial items. Experiments have been performed on real-life data sets to measure the information loss and the accuracy of answering aggregate queries. Experimental results show that the approach improves the balance between privacy protection and data utility.
4

Saul, Lawrence K. "A tractable latent variable model for nonlinear dimensionality reduction." Proceedings of the National Academy of Sciences 117, no. 27 (June 22, 2020): 15403–8. http://dx.doi.org/10.1073/pnas.1916012117.

Abstract:
We propose a latent variable model to discover faithful low-dimensional representations of high-dimensional data. The model computes a low-dimensional embedding that aims to preserve neighborhood relationships encoded by a sparse graph. The model both leverages and extends current leading approaches to this problem. Like t-distributed Stochastic Neighbor Embedding, the model can produce two- and three-dimensional embeddings for visualization, but it can also learn higher-dimensional embeddings for other uses. Like LargeVis and Uniform Manifold Approximation and Projection, the model produces embeddings by balancing two goals—pulling nearby examples closer together and pushing distant examples further apart. Unlike these approaches, however, the latent variables in our model provide additional structure that can be exploited for learning. We derive an Expectation–Maximization procedure with closed-form updates that monotonically improve the model’s likelihood: In this procedure, embeddings are iteratively adapted by solving sparse, diagonally dominant systems of linear equations that arise from a discrete graph Laplacian. For large problems, we also develop an approximate coarse-graining procedure that avoids the need for negative sampling of nonadjacent nodes in the graph. We demonstrate the model’s effectiveness on datasets of images and text.
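The abstract's key computational step, solving a sparse, diagonally dominant linear system arising from a discrete graph Laplacian, can be illustrated as below. This is a hedged sketch with toy data; the matrix M, right-hand side b, and the diagonal shift are assumptions for illustration, not the paper's exact EM update.

```python
# Hedged sketch: solve a sparse, diagonally dominant system built from a
# graph Laplacian. Toy data throughout; not the paper's exact EM update.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 200
A = sp.random(n, n, density=0.02, random_state=1)    # random sparse graph
A = A + A.T                                          # symmetric nonnegative weights
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A  # graph Laplacian (PSD)
M = L + 1e-2 * sp.eye(n)     # small shift (assumed) makes M strictly diagonally dominant

b = rng.normal(size=n)       # toy right-hand side: one embedding coordinate
x, info = cg(M, b, atol=1e-8)  # conjugate gradient suits sparse SPD systems
assert info == 0
```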
5

Li, Xinyu, Xiaoguang Gao, and Chenfeng Wang. "A Novel BN Learning Algorithm Based on Block Learning Strategy." Sensors 20, no. 21 (November 7, 2020): 6357. http://dx.doi.org/10.3390/s20216357.

Abstract:
Learning accurate Bayesian Network (BN) structures from high-dimensional and sparse data is difficult because of high computational complexity. To learn accurate structures for high-dimensional and sparse data faster, this paper adopts a divide-and-conquer strategy and proposes a block learning algorithm with a mutual-information-based K-means algorithm (the BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in the BN and a maximum-minimum parents and children (MMPC) algorithm to obtain the whole skeleton of the BN and find possible graph structures based on the separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to get possible BNs and find the best BN by a scoring function. Experiments show that, for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to existing block learning algorithms, the BLMKM algorithm has a time advantage while ensuring accuracy. Analysis of a real radar effect mechanism dataset proves that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is efficient for BN learning and has practical application value.
6

Li, Ying, Xiaojun Xu, and Jianbo Li. "High-Dimensional Sparse Graph Estimation by Integrating DTW-D Into Bayesian Gaussian Graphical Models." IEEE Access 6 (2018): 34279–87. http://dx.doi.org/10.1109/access.2018.2849213.

7

Dobson, Andrew, and Kostas Bekris. "Improved Heuristic Search for Sparse Motion Planning Data Structures." Proceedings of the International Symposium on Combinatorial Search 5, no. 1 (September 1, 2021): 196–97. http://dx.doi.org/10.1609/socs.v5i1.18334.

Abstract:
Sampling-based methods provide efficient, flexible solutions for motion planning, even for complex, high-dimensional systems. Asymptotically optimal planners ensure convergence to the optimal solution, but produce dense structures. This work shows how to extend sparse methods achieving asymptotic near-optimality using multiple-goal heuristic search during graph construction. The resulting method produces identical output to the existing Incremental Roadmap Spanner approach but in an order of magnitude less time.
8

Kefato, Zekarias, and Sarunas Girdzijauskas. "Gossip and Attend: Context-Sensitive Graph Representation Learning." Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 351–59. http://dx.doi.org/10.1609/icwsm.v14i1.7305.

Abstract:
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representations of high-dimensional and often sparse graphs. Most studies explore the structure and metadata associated with the graph using random walks and employ unsupervised or semi-supervised learning schemes. Learning in these methods is context-free, resulting in only a single representation per node. Recent studies have questioned the adequacy of a single representation and proposed context-sensitive approaches, which are capable of extracting multiple node representations for different contexts. This proved to be highly effective in applications such as link prediction and ranking. However, most of these methods rely on additional textual features that require complex and expensive RNNs or CNNs to capture high-level features, or rely on a community detection algorithm to identify multiple contexts of a node. In this study we show that, in order to extract high-quality context-sensitive node representations, it is not necessary to rely on supplementary node features, nor to employ computationally heavy and complex models. We propose Goat, a context-sensitive algorithm inspired by gossip communication and a mutual attention mechanism that operates simply over the structure of the graph. We show the efficacy of Goat using 6 real-world datasets on link prediction and node clustering tasks and compare it against 12 popular and state-of-the-art (SOTA) baselines. Goat consistently outperforms them and achieves up to 12% and 19% gain over the best performing methods on link prediction and clustering tasks, respectively.
9

Li, Pei Heng, Taeho Lee, and Hee Yong Youn. "Dimensionality Reduction with Sparse Locality for Principal Component Analysis." Mathematical Problems in Engineering 2020 (May 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/9723279.

Abstract:
Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into low-dimensional representation. The existing schemes usually preserve either only the global structure or local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively consider the trade-off between the complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph more robust against the noise. In addition, simultaneous learning of projection matrix and sparse similarity matrix is possible. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.
10

Chen, Dongming, Mingshuo Nie, Hupo Zhang, Zhen Wang, and Dongqi Wang. "Network Embedding Algorithm Taking in Variational Graph AutoEncoder." Mathematics 10, no. 3 (February 2, 2022): 485. http://dx.doi.org/10.3390/math10030485.

Abstract:
Complex networks with node attribute information are employed to represent complex relationships between objects. Research on attributed network embedding fuses the topology and the node attribute information of the attributed network in a common latent representation space, encoding the high-dimensional sparse network information into a low-dimensional dense vector representation and effectively improving the performance of network analysis tasks. Current research on attributed network embedding faces the problems of the high-dimensional sparsity of the attribute eigenmatrix and the underutilization of attribute information. In this paper, we propose a network embedding algorithm taking in a variational graph autoencoder (NEAT-VGA). This algorithm first pre-processes the attribute features, i.e., performs attribute feature learning for the network nodes. Then, the feature learning matrix and the adjacency matrix of the network are fed into the variational graph autoencoder algorithm to obtain the Gaussian distribution of the latent vectors, which more easily generates high-quality node embedding vectors. The node embeddings obtained by sampling this Gaussian distribution are then reconstructed under structural and attribute losses. The loss function is minimized by iterative training until a low-dimensional vector representation containing both the network structure and the node attribute information is obtained, and the performance of the algorithm is evaluated by link prediction experiments.
11

Sturtevant, Nathan. "A Sparse Grid Representation for Dynamic Three-Dimensional Worlds." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 7, no. 1 (October 9, 2011): 73–78. http://dx.doi.org/10.1609/aiide.v7i1.12438.

Abstract:
Grid representations offer many advantages for path planning. Lookups in grids are fast, due to the uniform memory layout, and it is easy to modify grids. But, grids often have significant memory requirements, they cannot directly represent more complex surfaces, and path planning is slower due to their high granularity representation of the world. The speed of path planning on grids has been addressed using abstract representations, such as has been documented in work on Dragon Age: Origins. The abstract representation used in this game was compact, preventing permanent changes to the grid. In this paper we introduce a sparse grid representation, where grid cells are only stored where necessary. From this sparse representation we incrementally build an abstract graph which represents possible movement in the world at a high-level of granularity. This sparse representation also allows the representation of three-dimensional worlds. This representation allows the world to be incrementally changed in under a millisecond, reducing the maximum memory required to store a map and abstraction from Dragon Age: Origins by nearly one megabyte. Fundamentally, the representation allows previously allocated but unused memory to be used in ways that result in higher-quality planning and more intelligent agents.
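As a rough illustration of the data structure described here, a sparse grid can be as simple as a hash map keyed by integer cell coordinates, so empty space costs no memory and cells can be changed incrementally. The class and method names below are hypothetical, not the paper's implementation.

```python
# Minimal sketch of a sparse 3D grid: only occupied cells are stored.
class SparseGrid3D:
    def __init__(self):
        self.cells = {}                    # (x, y, z) -> cell payload

    def set_cell(self, x, y, z, value):
        self.cells[(x, y, z)] = value      # O(1) incremental change

    def clear_cell(self, x, y, z):
        self.cells.pop((x, y, z), None)

    def is_blocked(self, x, y, z):
        return (x, y, z) in self.cells     # absent cells are free space

grid = SparseGrid3D()
grid.set_cell(3, 4, 0, "wall")
assert grid.is_blocked(3, 4, 0) and not grid.is_blocked(0, 0, 0)
```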
12

Li, Shuang, Bing Liu, and Chen Zhang. "Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing." Computational Intelligence and Neuroscience 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/4920670.

Abstract:
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
13

Wang, Zhan, Qiuqi Ruan, and Gaoyun An. "Face Recognition Using Double Sparse Local Fisher Discriminant Analysis." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/636928.

Abstract:
Local Fisher discriminant analysis (LFDA) was proposed for dealing with the multimodal problem. It combines the idea of locality preserving projections (LPP) for preserving the local structure of high-dimensional data with the idea of Fisher discriminant analysis (FDA) for obtaining discriminant power. However, LFDA suffers from the undersampled problem, as do many dimensionality reduction methods, and its projection matrix is not sparse. In this paper, we propose double sparse local Fisher discriminant analysis (DSLFDA) for face recognition. The proposed method first constructs a sparse and data-adaptive graph with a nonnegative constraint. Then, DSLFDA reformulates the objective function as a regression-type optimization problem. The undersampled problem is avoided naturally, and the sparse solution can be obtained by adding an l1 penalty to the regression-type problem. Experiments on the Yale, ORL, and CMU PIE face databases are implemented to demonstrate the effectiveness of the proposed method.
14

Majeed, Abdul, and Sungchang Lee. "A Fast Global Flight Path Planning Algorithm Based on Space Circumscription and Sparse Visibility Graph for Unmanned Aerial Vehicle." Electronics 7, no. 12 (December 2, 2018): 375. http://dx.doi.org/10.3390/electronics7120375.

Abstract:
This paper proposes a new flight path planning algorithm that finds collision-free, optimal/near-optimal and flyable paths for unmanned aerial vehicles (UAVs) in three-dimensional (3D) environments with fixed obstacles. The proposed algorithm significantly reduces pathfinding computing time without significantly degrading path lengths by using space circumscription and a sparse visibility graph in the pathfinding process. We devise a novel method by exploiting the information about obstacle geometry to circumscribe the search space in the form of a half cylinder from which a working path for UAV can be computed without sacrificing the guarantees on near-optimality and speed. Furthermore, we generate a sparse visibility graph from the circumscribed space and find the initial path, which is subsequently optimized. The proposed algorithm effectively resolves the efficiency and optimality trade-off by searching the path only from the high priority circumscribed space of a map. The simulation results obtained from various maps, and comparison with the existing methods show the effectiveness of the proposed algorithm and verify the aforementioned claims.
15

Li, Zixuan, Hao Li, Kenli Li, Fan Wu, Lydia Chen, and Keqin Li. "Locality Sensitive Hash Aggregated Nonlinear Neighborhood Matrix Factorization for Online Sparse Big Data Analysis." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–27. http://dx.doi.org/10.1145/3497749.

Abstract:
Matrix factorization (MF) can extract the low-rank features and integrate the information of the data manifold distribution from high-dimensional data, which can consider the nonlinear neighborhood information. Thus, MF has drawn wide attention for low-rank analysis of sparse big data, e.g., Collaborative Filtering (CF) Recommender Systems, Social Networks, and Quality of Service. However, the following two problems exist: (1) huge computational overhead for the construction of the Graph Similarity Matrix (GSM) and (2) huge memory overhead for the intermediate GSM. Therefore, GSM-based MF, e.g., kernel MF, graph regularized MF, and so on, cannot be directly applied to the low-rank analysis of sparse big data on cloud and edge platforms. To solve this intractable problem for sparse big data analysis, we propose Locality Sensitive Hashing (LSH) aggregated MF (LSH-MF), which can solve the following problems: (1) The proposed probabilistic projection strategy of LSH-MF can avoid the construction of the GSM. Furthermore, LSH-MF can satisfy the requirement for the accurate projection of sparse big data. (2) To run LSH-MF for fine-grained parallelization and online learning on GPUs, we also propose CULSH-MF, which works on CUDA parallelization. Experimental results show that CULSH-MF can not only reduce the computational time and memory overhead but also obtain higher accuracy. Compared with deep learning models, CULSH-MF can not only save training time but also achieve the same accuracy performance.
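A minimal sketch of the locality-sensitive hashing idea the paper builds on, bucketing rows by random-hyperplane signatures so approximate neighbors are found without ever materializing the graph similarity matrix, might look as follows. Illustrative only; this is not the authors' LSH-MF or CULSH-MF code, and all sizes are assumptions.

```python
# Random-hyperplane LSH: rows that share a signature key are likely similar,
# so candidate neighbors come from one bucket instead of a full n-by-n GSM.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 64))      # rows: feature vectors of the sparse data
planes = rng.normal(size=(64, 16))   # 16 random hyperplanes -> 16-bit signatures

signs = (X @ planes) > 0             # sign pattern per row
keys = np.packbits(signs, axis=1)    # compact hash key per row

buckets = defaultdict(list)
for i, key in enumerate(map(bytes, keys)):
    buckets[key].append(i)           # rows sharing a key land in one bucket

# candidate neighbors of row 0 are drawn from its bucket only:
candidates = buckets[bytes(keys[0])]
```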
16

Yang, Ye, Yongli Hu, and Fei Wu. "Sparse and Low-Rank Subspace Data Clustering with Manifold Regularization Learned by Local Linear Embedding." Applied Sciences 8, no. 11 (November 6, 2018): 2175. http://dx.doi.org/10.3390/app8112175.

Abstract:
Data clustering is an important research topic in the data mining and signal processing communities. Among data clustering methods, the subspace spectral clustering methods based on the self-expression model, e.g., the Sparse Subspace Clustering (SSC) and Low Rank Representation (LRR) methods, have attracted a lot of attention and shown good performance. The key step of SSC and LRR is to construct a proper affinity or similarity matrix of the data for spectral clustering. Recently, a Laplacian graph constraint was introduced into the basic SSC and LRR and obtained considerable improvement. However, current graph construction methods do not well exploit and reveal the non-linear properties of the clustering data, which are common for high-dimensional data. In this paper, we introduce the classic manifold learning method, Local Linear Embedding (LLE), to learn the non-linear structure underlying the data and use the learned local geometry of the manifold as a regularization for SSC and LRR, which results in the proposed LLE-SSC and LLE-LRR clustering methods. Additionally, to solve the complex optimization problem involved in the proposed models, an efficient algorithm is also proposed. We test the proposed data clustering methods on several types of public databases. The experimental results show that our methods outperform typical subspace clustering methods with a Laplacian graph constraint.
17

Bradley, Patrick, Sina Keller, and Martin Weinmann. "Unsupervised Feature Selection Based on Ultrametricity and Sparse Training Data: A Case Study for the Classification of High-Dimensional Hyperspectral Data." Remote Sensing 10, no. 10 (September 29, 2018): 1564. http://dx.doi.org/10.3390/rs10101564.

Abstract:
In this paper, we investigate the potential of unsupervised feature selection techniques for classification tasks, where only sparse training data are available. This is motivated by the fact that unsupervised feature selection techniques combine the advantages of standard dimensionality reduction techniques (which only rely on the given feature vectors and not on the corresponding labels) and supervised feature selection techniques (which retain a subset of the original set of features). Thus, feature selection becomes independent of the given classification task and, consequently, a subset of generally versatile features is retained. We present different techniques relying on the topology of the given sparse training data. Thereby, the topology is described with an ultrametricity index. For the latter, we take into account the Murtagh Ultrametricity Index (MUI) which is defined on the basis of triangles within the given data and the Topological Ultrametricity Index (TUI) which is defined on the basis of a specific graph structure. In a case study addressing the classification of high-dimensional hyperspectral data based on sparse training data, we demonstrate the performance of the proposed unsupervised feature selection techniques in comparison to standard dimensionality reduction and supervised feature selection techniques on four commonly used benchmark datasets. The achieved classification results reveal that involving supervised feature selection techniques leads to similar classification results as involving unsupervised feature selection techniques, while the latter perform feature selection independently from the given classification task and thus deliver generally versatile features.
18

Zhou, Ruqin, and Wanshou Jiang. "A Ridgeline-Based Terrain Co-registration for Satellite LiDAR Point Clouds in Rough Areas." Remote Sensing 12, no. 13 (July 6, 2020): 2163. http://dx.doi.org/10.3390/rs12132163.

Abstract:
Registering extensive, enormous and sparse satellite light detection and ranging (LiDAR) point clouds is still a completely new and challenging task. Aimed at this problem, this study provides a ridgeline-based terrain co-registration method for satellite LiDAR point clouds in rough areas. This method has several merits: (1) only ridgelines are extracted as neighbor information for feature description and their intersections are extracted as keypoints, which greatly reduces the number of points for subsequent processing; the extracted keypoints are highly repeatable and distinctive; (2) a new local reference frame (LRF) construction method is designed by combining both three-dimensional (3D) coordinate and normal vector covariance matrices, which effectively improves its direction consistency; (3) a minimum cost–maximum flow (MCMF) graph-matching strategy is adopted to maximize the similarity sum in a sparse matching graph, avoiding the "many-to-many" and "one-to-many" problems caused by traditional matching strategies; (4) a transformation-matrix-based clustering is adopted with least squares (LS)-based registration, where mismatches are eliminated and correct pairs fully participate in the evaluation of optimal parameters to improve stability. Experiments on simulated satellite LiDAR point clouds show that this method can effectively remove mismatches and estimate optimal parameters with high accuracy, especially in rough areas.
19

Majeed, Abdul, and Seong Oun Hwang. "Path Planning Method for UAVs Based on Constrained Polygonal Space and an Extremely Sparse Waypoint Graph." Applied Sciences 11, no. 12 (June 8, 2021): 5340. http://dx.doi.org/10.3390/app11125340.

Abstract:
Finding an optimal/quasi-optimal path for Unmanned Aerial Vehicles (UAVs) utilizing full map information yields time performance degradation in large and complex three-dimensional (3D) urban environments populated by various obstacles. A major portion of the computing time is usually wasted on modeling and exploration of spaces that have a very low possibility of providing optimal/sub-optimal paths. However, computing time can be significantly reduced by searching for paths solely in the spaces that have the highest priority of providing an optimal/sub-optimal path. Many Path Planning (PP) techniques have been proposed, but a majority of the existing techniques equally evaluate many spaces of the maps, including unlikely ones, thereby creating time performance issues. Ignoring high-probability spaces and instead exploring too many spaces on maps while searching for a path yields extensive computing-time overhead. This paper presents a new PP method that finds optimal/quasi-optimal and safe (e.g., collision-free) working paths for UAVs in a 3D urban environment encompassing substantial obstacles. By using Constrained Polygonal Space (CPS) and an Extremely Sparse Waypoint Graph (ESWG) while searching for a path, the proposed PP method significantly lowers pathfinding time complexity without degrading the length of the path by much. We suggest an intelligent method exploiting obstacle geometry information to constrain the search space in a 3D polygon form from which a quasi-optimal flyable path can be found quickly. Furthermore, we perform task modeling with an ESWG using as few nodes and edges from the CPS as possible, and we find an abstract path that is subsequently improved. The results achieved from extensive experiments, and comparison with prior methods certify the efficacy of the proposed method and verify the above assertions.
20

Kiesel, Scott, Ethan Burns, and Wheeler Ruml. "Abstraction-Guided Sampling for Motion Planning." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 162–63. http://dx.doi.org/10.1609/socs.v3i1.18265.

Abstract:
Motion planning in continuous space is a fundamental robotics problem that has been approached from many perspectives. Rapidly-exploring Random Trees (RRTs) use sampling to efficiently traverse the continuous and high-dimensional state space. Heuristic graph search methods use lower bounds on solution cost to focus effort on portions of the space that are likely to be traversed by low-cost solutions. In this work, we bring these two ideas together in a technique called f-biasing: we use estimates of solution cost, computed as in heuristic search, to guide sparse sampling, as in RRTs. We see this new technique as strengthening the connections between motion planning in robotics and combinatorial search in artificial intelligence.
21

Zhang, Zhihao. "A Method of Recommending Physical Education Network Course Resources Based on Collaborative Filtering Technology." Scientific Programming 2021 (October 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/9531111.

Abstract:
Current research on e-learning shows that existing e-learning systems applied to the recommendation of learning resources offer only two search methods: Top-N and keyword search. These search methods cannot effectively recommend learning resources to learners. Therefore, in this paper, collaborative filtering recommendation technology is applied to the process of personalized recommendation of learning resources. We obtain user content and functional interest and predict comprehensive interest for web and big data through an infinite deep neural network. Based on the collaborative knowledge graph and the collaborative filtering algorithm, the semantic information of teaching network resources is extracted from the collaborative knowledge graph. According to the principles of nearest-neighbor recommendation, the course attribute value preference matrix (APM) is obtained first. Next, the predicted course values are sorted in descending order, and the top T courses with the highest predicted values are selected as the final recommended course set for the target learners. Each course has its own online classroom; the teacher publishes online class details ahead of time, and students can purchase online access to the classroom number and password. The experimental results show that the optimal number of clusters k is 9. Furthermore, for extremely sparse matrices, the collaborative filtering method is more suitable for clustering in the transformed low-dimensional space. The average recommendation satisfaction of the collaborative filtering method is approximately 43.6%, which demonstrates high recommendation quality.
22

Javidian, Mohammad Ali, Marco Valtorta, and Pooyan Jamshidi. "AMP Chain Graphs: Minimal Separators and Structure Learning Algorithms." Journal of Artificial Intelligence Research 69 (October 7, 2020): 419–70. http://dx.doi.org/10.1613/jair.1.12101.

Abstract:
This paper deals with chain graphs (CGs) under the Andersson–Madigan–Perlman (AMP) interpretation. We address the problem of finding a minimal separator in an AMP CG, namely, finding a set Z of nodes that separates a given non-adjacent pair of nodes such that no proper subset of Z separates that pair. We analyze several versions of this problem and offer polynomial time algorithms for each. These include finding a minimal separator from a restricted set of nodes, finding a minimal separator for two given disjoint sets, and testing whether a given separator is minimal. To address the problem of learning the structure of AMP CGs from data, we show that the PC-like algorithm is order dependent, in the sense that the output can depend on the order in which the variables are given. We propose several modifications of the PC-like algorithm that remove part or all of this order-dependence. We also extend the decomposition-based approach for learning Bayesian networks (BNs) to learn AMP CGs, which include BNs as a special case, under the faithfulness assumption. We prove the correctness of our extension using the minimal separator results. Experiments using standard benchmarks and synthetically generated models and data demonstrate the competitive performance of our decomposition-based method, called LCD-AMP, in comparison with (modified versions of) the PC-like algorithm. The LCD-AMP algorithm usually outperforms the PC-like algorithm, and our modifications of the PC-like algorithm learn structures that are more similar to the underlying ground truth graphs than the original PC-like algorithm, especially in high-dimensional settings. In particular, we empirically show that the results of both algorithms are more accurate and more stable when the sample size is reasonably large and the underlying graph is sparse.
23

Chen, Junting, Liyun Zhong, and Caiyun Cai. "Using Exponential Kernel for Semi-Supervised Word Sense Disambiguation." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6929–34. http://dx.doi.org/10.1166/jctn.2016.5649.

Abstract:
Word sense disambiguation (WSD) in natural language text is a fundamental semantic understanding task at the lexical level in natural language processing (NLP) applications. Kernel methods such as support vector machine (SVM) have been successfully applied to WSD. This is mainly due to their relatively high classification accuracy as well as their ability to handle high dimensional and sparse data. A significant challenge in WSD is to reduce the need for labeled training data while maintaining an acceptable performance. In this paper, we present a semi-supervised technique using the exponential kernel for WSD. Specifically, the semantic similarities between terms are first determined with both labeled and unlabeled training data by means of a diffusion process on a graph defined by lexicon and co-occurrence information, and the exponential kernel is then constructed based on the learned semantic similarity. Finally, the SVM classifier trains a model for each class during the training phase and this model is then applied to all test examples in the test phase. The main feature of this approach is that it takes advantage of the exponential kernel to reveal the semantic similarities between terms in an unsupervised manner, which provides a kernel framework for semi-supervised learning. Experiments on several SENSEVAL benchmark data sets demonstrate the proposed approach is sound and effective.
24

Zhao, Haitao, Sujay Datta, and Zhong-Hui Duan. "An Integrated Approach of Learning Genetic Networks From Genome-Wide Gene Expression Data Using Gaussian Graphical Model and Monte Carlo Method." Bioinformatics and Biology Insights 17 (January 2023): 117793222311529. http://dx.doi.org/10.1177/11779322231152972.

Abstract:
Global genetic networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single genes or local networks. The Gaussian graphical model (GGM) is widely applied to learn genetic networks because it defines an undirected graph decoding the conditional dependence between genes. Many algorithms based on the GGM have been proposed for learning genetic network structures. Because the number of gene variables is typically far more than the number of samples collected, and a real genetic network is typically sparse, the graphical lasso implementation of GGM has become a popular tool for inferring the conditional interdependence among genes. However, graphical lasso, although showing good performance on low dimensional data sets, is computationally expensive and inefficient or even unable to work directly on genome-wide gene expression data sets. In this study, the method of Monte Carlo Gaussian graphical model (MCGGM) was proposed to learn global genetic networks of genes. This method uses a Monte Carlo approach to sample subnetworks from genome-wide gene expression data and graphical lasso to learn the structures of the subnetworks. The learned subnetworks are then integrated to approximate a global genetic network. The proposed method was evaluated with a relatively small real data set of RNA-seq expression levels. The results indicate that the proposed method shows a strong ability to decode the interactions with high conditional dependences among genes. The method was then applied to genome-wide data sets of RNA-seq expression levels. The gene interactions with high interdependence from the estimated global networks show that most of the predicted gene–gene interactions have been reported in the literature as playing important roles in different human cancers. The results also validate the ability and reliability of the proposed method to identify high conditional dependences among genes in large-scale data sets.
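The sampling-and-integration scheme described here can be sketched with off-the-shelf tools: repeatedly fit graphical lasso on random gene subsets and accumulate edge evidence into an approximate global network. The snippet below uses scikit-learn's GraphicalLasso on toy data; the subset size, number of rounds, and regularization alpha are assumptions, and this is not the authors' MCGGM implementation.

```python
# Hedged sketch of the Monte Carlo idea: graphical lasso on sampled subnetworks,
# with edge evidence accumulated into a global vote matrix. Toy data only.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 300))       # 60 samples, 300 "genes"
p, k, rounds = X.shape[1], 25, 40
edge_votes = np.zeros((p, p))

for _ in range(rounds):
    genes = rng.choice(p, size=k, replace=False)  # sampled subnetwork
    model = GraphicalLasso(alpha=0.5).fit(X[:, genes])
    prec = model.precision_ != 0                  # conditional dependences
    for i in range(k):
        for j in range(k):
            if i != j and prec[i, j]:
                edge_votes[genes[i], genes[j]] += 1  # integrate into global net
```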
25

Rütimann, Philipp, and Peter Bühlmann. "High dimensional sparse covariance estimation via directed acyclic graphs." Electronic Journal of Statistics 3 (2009): 1133–60. http://dx.doi.org/10.1214/09-ejs534.

26

Urakov, A. R., and T. V. Timeryaev. "All-Pairs Shortest Paths Algorithm for High-dimensional Sparse Graphs." Computer Science and Information Technology 1, no. 3 (November 2013): 233–37. http://dx.doi.org/10.13189/csit.2013.010309.

27

Han, Fang, Xiaoyan Han, Han Liu, and Brian Caffo. "Sparse median graphs estimation in a high-dimensional semiparametric model." Annals of Applied Statistics 10, no. 3 (September 2016): 1397–426. http://dx.doi.org/10.1214/16-aoas940.

28

Songhafouo Tsopméné, Paul Arnaud, and Victor Turchin. "Rational homology and homotopy of high-dimensional string links." Forum Mathematicum 30, no. 5 (September 1, 2018): 1209–35. http://dx.doi.org/10.1515/forum-2016-0192.

Abstract:
Arone and the second author showed that when the dimensions are in the stable range, the rational homology and homotopy of the high-dimensional analogues of spaces of long knots can be calculated as the homology of a direct sum of finite graph-complexes that they described explicitly. They also showed that these homology and homotopy groups can be interpreted as the higher-order Hochschild homology, also called Hochschild–Pirashvili homology. In this paper, we generalize all these results to high-dimensional analogues of spaces of string links. The methods of our paper are applicable in the range when the ambient dimension is at least twice the maximal dimension of a link component plus two, which in particular guarantees that the spaces under study are connected. However, we conjecture that our homotopy graph-complex computes the rational homotopy groups of link spaces whenever the codimension is greater than two, i.e. whenever the Goodwillie–Weiss calculus is applicable. Using Haefliger's approach to calculate the groups of isotopy classes of higher-dimensional links, we confirm our conjecture at the level of $\pi_0$.
29

Hurley, C. B., and R. W. Oldford. "Graphs as navigational infrastructure for high dimensional data spaces." Computational Statistics 26, no. 4 (February 11, 2011): 585–612. http://dx.doi.org/10.1007/s00180-011-0228-6.

30

Shojaie, A., and G. Michailidis. "Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs." Biometrika 97, no. 3 (July 9, 2010): 519–38. http://dx.doi.org/10.1093/biomet/asq038.

31

Zhang, Wen, and Bruce J. West. "Box Dimension of Colored Noise and Deterministic Time Series in High Dimensional Euclidean Spaces." Fractals 04, no. 01 (March 1996): 91–95. http://dx.doi.org/10.1142/s0218348x9600011x.

Abstract:
We investigate the box dimension of a time series having an inverse power-law spectrum in a high dimensional Euclidean space. The time series can be random (colored noise) or deterministic. Both isotropic and anisotropic cases are included in our investigation. We study both the graph dimension and the trail dimension of the time series. We show that, with the same inverse power-law spectrum, a deterministic series has a lower graph dimension than colored noise, though both can have fractal dimensions. We also derive a sharp upper bound on the trail dimension of the time series.
32

Tan, Zhen, Xiang Zhao, Yang Fang, Bin Ge, and Weidong Xiao. "Knowledge Graph Representation via Similarity-Based Embedding." Scientific Programming 2018 (July 15, 2018): 1–12. http://dx.doi.org/10.1155/2018/6325635.

Abstract:
Knowledge graph, a typical multi-relational structure, includes large-scale facts of the world, yet it is still far away from completeness. Knowledge graph embedding, as a representation method, constructs a low-dimensional and continuous space to describe the latent semantic information and predict the missing facts. Among various solutions, almost all embedding models have high time and memory-space complexities and, hence, are difficult to apply to large-scale knowledge graphs. Some other embedding models, such as TransE and DistMult, although with lower complexity, ignore inherent features and only use correlations between different entities to represent the features of each entity. To overcome these shortcomings, we present a novel low-complexity embedding model, namely, SimE-ER, to calculate the similarity of entities in independent and associated spaces. In SimE-ER, each entity (relation) is described as two parts. The entity (relation) features in independent space are represented by the features entity (relation) intrinsically owns and, in associated space, the entity (relation) features are expressed by the entity (relation) features they connect. And the similarity between the embeddings of the same entities in different representation spaces is high. In experiments, we evaluate our model with two typical tasks: entity prediction and relation prediction. Compared with the state-of-the-art models, our experimental results demonstrate that SimE-ER outperforms existing competitors and has low time and memory-space complexities.
33

Le, Tuan M. V., and Hady W. Lauw. "Semantic Visualization with Neighborhood Graph Regularization." Journal of Artificial Intelligence Research 55 (April 28, 2016): 1091–133. http://dx.doi.org/10.1613/jair.4983.

Abstract:
Visualization of high-dimensional data, such as text documents, is useful to map out the similarities among various data points. In the high-dimensional space, documents are commonly represented as bags of words, with dimensionality equal to the vocabulary size. Classical approaches to document visualization directly reduce this into visualizable two or three dimensions. Recent approaches consider an intermediate representation in topic space, between word space and visualization space, which preserves the semantics by topic modeling. While aiming for a good fit between the model parameters and the observed data, previous approaches have not considered the local consistency among data instances. We consider the problem of semantic visualization by jointly modeling topics and visualization on the intrinsic document manifold, modeled using a neighborhood graph. Each document has both a topic distribution and visualization coordinate. Specifically, we propose an unsupervised probabilistic model, called SEMAFORE, which aims to preserve the manifold in the lower-dimensional spaces through a neighborhood regularization framework designed for the semantic visualization task. To validate the efficacy of SEMAFORE, our comprehensive experiments on a number of real-life text datasets of news articles and Web pages show that the proposed methods outperform the state-of-the-art baselines on objective evaluation metrics.
34

Madoš, Branislav, Eva Chovancová, Martin Chovanec, and Norbert Ádám. "CSVO: Clustered Sparse Voxel Octrees—A Hierarchical Data Structure for Geometry Representation of Voxelized 3D Scenes." Symmetry 14, no. 10 (October 12, 2022): 2114. http://dx.doi.org/10.3390/sym14102114.

Abstract:
When representing the geometry of voxelized three-dimensional scenes (especially those voxelized to high resolutions) in a naive, uncompressed form, one may end up using vast amounts of data. These can easily exhaust the available memory capacity of the graphics card, the operating memory or even the secondary storage of the computer. A viable solution to this problem is to use domain-specific hierarchical data structures, based on octant trees or directed acyclic graphs, which, among other advantages, provide a compact binary representation that can thus be considered to be their compressed encoding. These data structures include, inter alia, sparse voxel octrees, sparse voxel directed acyclic graphs and symmetry-aware sparse voxel directed acyclic graphs. This paper proposes a new domain-specific hierarchical data structure, the clustered sparse voxel octree. It is designed to represent the geometry of voxelized three-dimensional scenes and can be constructed using the out-of-core algorithm proposed in the paper. The advantage of the presented data structure is its compact binary representation, achieved by omitting a significant number of pointers to child nodes (82.55% in the case of the Angel Lucy model at 128³ voxel resolution) and by using a wider range of child node pointer lengths, including 8b, 16b and 32b. We achieved encoding from 6.57 to 6.82 times more compact than sparse voxel octrees whose node components were all 32b-aligned, and from 4.11 to 4.27 times more compact when not all node components were 32b-aligned.
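The pointer-omission idea behind sparse voxel octrees, which CSVO pushes further with clustering and mixed pointer widths, can be sketched with the classic child-mask trick: a node stores one bit per octant and packs only existing children contiguously. The code below is an illustrative toy, not CSVO's actual layout.

```python
# Minimal sketch of pointer saving in sparse voxel octrees: an 8-bit child
# mask replaces eight pointers, and absent subtrees cost no storage.
class OctreeNode:
    __slots__ = ("child_mask", "children")

    def __init__(self):
        self.child_mask = 0     # bit i set => octant i is occupied
        self.children = []      # packed list of existing child nodes only

    def child(self, octant):
        if not (self.child_mask >> octant) & 1:
            return None         # empty space: nothing stored
        # rank query: children before `octant` = number of set bits below it
        index = bin(self.child_mask & ((1 << octant) - 1)).count("1")
        return self.children[index]
```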
35

Arone, Gregory, and Victor Turchin. "Graph-complexes computing the rational homotopy of high dimensional analogues of spaces of long knots." Annales de l'Institut Fourier 65, no. 1 (2015): 1–62. http://dx.doi.org/10.5802/aif.2924.

36

Chen, Shicong, Deyu Yuan, Shuhua Huang, and Yang Chen. "Link Prediction and Node Classification Based on Multitask Graph Autoencoder." Wireless Communications and Mobile Computing 2021 (April 17, 2021): 1–13. http://dx.doi.org/10.1155/2021/5537651.

Abstract:
The goal of network representation learning is to extract deep-level abstractions from data features, which can also be viewed as a process of transforming high-dimensional data into low-dimensional features. Learning the mapping functions between the two vector spaces is an essential problem. In this paper, we propose a new similarity index based on traditional machine learning, which integrates the concepts of common neighbors, local paths, and preferential attachment. Furthermore, to apply link prediction methods to the field of node classification, we have established an architecture named the multitask graph autoencoder. Specifically, in the context of structural deep network embedding, the architecture designs a framework with a high-order loss function by calculating node similarity from multiple angles, so that the model can make up for the deficiency of the second-order loss function. Through parameter fine-tuning, the high-order loss function is introduced into the optimized autoencoder. Experiments prove that the framework is generally applicable to the majority of classical similarity indexes.
37

Bhaduri, Susmita, and Anirban Bhaduri. "Searching for Supersymmetry at LHC Using the Complex-Network-Based Method of the Three-Dimensional Visibility-Graph." Physics 2, no. 3 (September 10, 2020): 436–54. http://dx.doi.org/10.3390/physics2030025.

Abstract:
For the last several decades, there has been tremendous interest in the search for Supersymmetry (SUSY) in high energy physics. At the Large Hadron Collider (LHC), there have been continuous searches for SUSY, for prompt and non-prompt particles, and for R-parity-conserving and R-parity-violating production and decays. The limits obtained from these experiments and analyses for detecting the signatures of supersymmetric particles (LSP) revealed greater possibilities for such experiments at the collider. However, these signatures are usually derived under somewhat optimistic assumptions about the decay of sparticles to the final states. Moreover, SUSY might be disguised at lower mass scales as a result of difficult and challenging mass spectra and mixed decay modes. In this investigation, a novel method of three-dimensional (3D) Visibility-Graph Analysis is proposed; it extends visibility-graph analysis of data series to scaling analysis in 3D space. The experimental data spaces analyzed are made up of the component space (in the X, Y and Z coordinates) of transverse momentum (pT) values taken from the 4-momenta of the final-state signatures of the pair of mega-jets extracted from the multiJet primary pp collision data from Run B of 2010 at 7 TeV, which was used for the search for SUSY using a razor filter. The symmetry scaling and the inherent scaling behavior, the scale-freeness of the multi-particle production process, are studied in terms of the 3D Power-of-Scale-freeness-of-Visibility-Graph (3D-PSVG) extracted from the 3D visibility graphs constructed out of the experimental data spaces. The signature of SUSY may be identified by analyzing the scaling behavior and long-range correlation inherent in the 3D space made up of the final-state signatures of multi-particles produced in the pp collision at 7 TeV, which the conventional method of analyzing the spectrum of invariant mass or pT may miss.
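For reference, the standard one-dimensional natural visibility graph that such methods generalize connects two samples whenever every intermediate sample lies strictly below the straight line joining them. A compact sketch of that base construction (not the authors' 3D extension) follows.

```python
# Natural visibility graph of a time series y: samples a and b are linked if
# every sample c between them satisfies
#   y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
def visibility_graph(y):
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))   # consecutive samples are always visible
    return edges

print(visibility_graph([2.0, 1.0, 3.0, 0.5, 2.5]))
```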
38

Kopitkov, Dmitry, and Vadim Indelman. "No belief propagation required: Belief space planning in high-dimensional state spaces via factor graphs, the matrix determinant lemma, and re-use of calculation." International Journal of Robotics Research 36, no. 10 (August 10, 2017): 1088–130. http://dx.doi.org/10.1177/0278364917721629.

Abstract:
We develop a computationally efficient approach for evaluating the information-theoretic term within belief space planning (BSP), where during belief propagation the state vector can be constant or augmented. We consider both unfocused and focused problem settings, where uncertainty reduction of the entire system or only of chosen variables is of interest, respectively. State-of-the-art approaches typically propagate the belief state, for each candidate action, through calculation of the posterior information (or covariance) matrix and subsequently compute its determinant (required for entropy). In contrast, our approach reduces runtime complexity by avoiding these calculations. We formulate the problem in terms of factor graphs and show that belief propagation is not needed, requiring instead a one-time calculation that depends on the (increasing with time) state dimensionality, and per-candidate calculations that are independent of the latter. To that end, we develop an augmented version of the matrix determinant lemma, and show that computations can be re-used when evaluating the impact of different candidate actions. These two key ingredients and the factor graph representation of the problem result in a computationally efficient (augmented) BSP approach that accounts for different sources of uncertainty and can be used with various sensing modalities. We examine the unfocused and focused instances of our approach, and compare it with the state of the art, in simulation and using real-world data, considering problems such as autonomous navigation in unknown environments, measurement selection and sensor deployment. We show that our approach significantly reduces running time without any compromise in performance.
39

Xie, Luodi, Huimin Huang, and Qing Du. "A Co-Embedding Model with Variational Auto-Encoder for Knowledge Graphs." Applied Sciences 12, no. 2 (January 12, 2022): 715. http://dx.doi.org/10.3390/app12020715.

Abstract:
Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations for entities and relations. It serves as the basis for downstream tasks, such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities/relations as vectors or tensors, mapping them in different semantic spaces and ignoring the uncertainties. The affinities between entities and relations are ambiguous when they are not embedded in the same latent spaces. In this paper, we incorporate a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the issue of neglecting uncertainty for KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method has the advantages of high quality and interpretability. Our experimental results on several benchmark datasets demonstrate our model’s superiority over the state-of-the-art baselines.
40

Chakraborty, Shubhadeep, and Ali Shojaie. "Nonparametric Causal Structure Learning in High Dimensions." Entropy 24, no. 3 (February 28, 2022): 351. http://dx.doi.org/10.3390/e24030351.

Abstract:
The PC and FCI algorithms are popular constraint-based methods for learning the structure of directed acyclic graphs (DAGs) in the absence and presence of latent and selection variables, respectively. These algorithms (and their order-independent variants, PC-stable and FCI-stable) have been shown to be consistent for learning sparse high-dimensional DAGs based on partial correlations. However, inferring conditional independences from partial correlations is valid if the data are jointly Gaussian or generated from a linear structural equation model, an assumption that may be violated in many applications. To broaden the scope of high-dimensional causal structure learning, we propose nonparametric variants of the PC-stable and FCI-stable algorithms that employ the conditional distance covariance (CdCov) to test for conditional independence relationships. As the key theoretical contribution, we prove that the high-dimensional consistency of the PC-stable and FCI-stable algorithms carries over to general distributions over DAGs when we implement CdCov-based nonparametric tests for conditional independence. Numerical studies demonstrate that our proposed algorithms perform nearly as well as PC-stable and FCI-stable for Gaussian distributions, and offer advantages in non-Gaussian graphical models.
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Barto, Libor, Jakub Bulín, Andrei Krokhin, and Jakub Opršal. "Algebraic Approach to Promise Constraint Satisfaction." Journal of the ACM 68, no. 4 (July 7, 2021): 1–66. http://dx.doi.org/10.1145/3457606.

Full text
Abstract:
The complexity and approximability of the constraint satisfaction problem (CSP) have been actively studied over the past 20 years. A new version of the CSP, the promise CSP (PCSP), has recently been proposed, motivated by open questions about the approximability of variants of satisfiability and graph colouring. The PCSP significantly extends the standard decision CSP. The complexity of CSPs with a fixed constraint language on a finite domain has recently been fully classified, greatly guided by the algebraic approach, which uses polymorphisms (high-dimensional symmetries of solution spaces) to analyse the complexity of problems. The corresponding classification for PCSPs is wide open and includes some long-standing open questions, such as the complexity of approximate graph colouring, as special cases. The basic algebraic approach to PCSP was initiated by Brakensiek and Guruswami, and in this article, we significantly extend it and lift it from concrete properties of polymorphisms to their abstract properties. We introduce a new class of problems that can be viewed as algebraic versions of the (Gap) Label Cover problem and show that every PCSP with a fixed constraint language is equivalent to a problem of this form. This allows us to identify a “measure of symmetry” that is well suited for comparing and relating the complexity of different PCSPs via the algebraic approach. We demonstrate how our theory can be applied by giving both general and specific hardness/tractability results. Among other things, we improve the state of the art in approximate graph colouring by showing that, for any k ≥ 3, it is NP-hard to find a (2k − 1)-colouring of a given k-colourable graph.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Hong, Yi, Chuanwen Luo, Deying Li, Zhibo Chen, and Xiyun Wang. "Lifetime-Maximized Strong Barrier Coverage of 3D Camera Sensor Networks." Wireless Communications and Mobile Computing 2022 (September 19, 2022): 1–12. http://dx.doi.org/10.1155/2022/2659901.

Full text
Abstract:
Camera sensor networks (CSNs) have the advantage of providing precise, multimedia information for a wide range of applications. The high coverage quality of CSNs is especially well suited to the monitoring requirements of barrier coverage. In three-dimensional (3D) application scenarios, tracking a potential intruder through monitored irregular spaces makes strong barrier coverage for CSNs considerably more challenging. In this paper, we consider the strong barrier coverage problem in 3D CSNs, with the objective of monitoring the intruder at high resolution while maximizing the network lifetime. We first define the problem and prove its hardness, based on the irregular space model and a network model that adopts the Region of Interest (ROI) sensing model with high effective resolution. We then design two sleep-and-awake scheduling algorithms for the problem, for homogeneous and heterogeneous networks respectively, based on an auxiliary graph transformation and the construction of disjoint flows. To evaluate the algorithms' performance on lifetime maximization, we conduct extensive simulation experiments and analyze the results with respect to their advantages and applicable scenarios.
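The "disjoint flows construction" can be illustrated by the standard reduction from node-disjoint barriers to maximum flow: split every sensor into an in/out pair with unit capacity so that each sensor joins at most one barrier; disjoint barriers can then be kept awake in rotation, one battery lifetime each. A hedged sketch of that reduction (the graph, terminals, and capacities are illustrative, not the paper's exact auxiliary graph):

    import networkx as nx

    def max_disjoint_barriers(coverage_graph, left, right):
        """Count node-disjoint left-right barriers via node splitting + max-flow.

        coverage_graph: undirected; an edge means two sensors' regions of interest
        overlap enough to continue a barrier. left/right: virtual terminals
        attached to sensors touching the two opposite walls.
        """
        g = nx.DiGraph()
        for v in coverage_graph.nodes:
            g.add_edge((v, "in"), (v, "out"), capacity=1)   # each sensor used once
        for u, v in coverage_graph.edges:
            g.add_edge((u, "out"), (v, "in"), capacity=1)
            g.add_edge((v, "out"), (u, "in"), capacity=1)
        value, _ = nx.maximum_flow(g, (left, "out"), (right, "in"))
        return value

    # Toy network: barriers S-a-b-T and S-c-T are node-disjoint.
    g = nx.Graph([("S", "a"), ("a", "b"), ("b", "T"), ("S", "c"), ("c", "T")])
    print(max_disjoint_barriers(g, "S", "T"))               # 2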
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Varoudis, Tasos, Abigail G. Swenson, Scott D. Kirkton, and James S. Waters. "Exploring nest structures of acorn dwelling ants with X-ray microtomography and surface-based three-dimensional visibility graph analysis." Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1753 (July 2, 2018): 20170237. http://dx.doi.org/10.1098/rstb.2017.0237.

Full text
Abstract:
The physical spaces within which organisms live affect their biology and in many cases can be considered part of their extended phenotype. The nests of social insect societies have a fundamental impact on their ability to function as complex superorganisms. Ants in many species excavate elaborate subterranean nests, but others inhabit relatively small pre-formed cavities within rock crevices and hollow seeds. Temnothorax ants, which often nest within acorns, have become a model system for studying collective decision making. While these ants have demonstrated remarkable degrees of rationality and consistent precision with regard to their nest choices, never before has the fine-scale internal architecture and spatial organization of their nests been investigated. We used X-ray microtomography to record high-resolution three-dimensional (3D) scans of Temnothorax colonies within their acorns. These data were then quantified using image segmentation and surface-based 3D visibility graph analysis, a new computational methodology for analysing spatial structures. The visibility graph analysis method integrates knowledge from the field of architecture with the empirical study of animal-built structures, thus providing the first methodological cross-disciplinary synergy of these two research areas. We found a surprisingly high surface area and degree of spatial heterogeneity within the acorn nests. Specific regions, such as those associated with the locations of queens and brood, were significantly more conducive to connectivity than others. From an architect's point of view, spatial analysis research has never focused on all-surface 3D movement of the kind we describe within ant nests. Therefore, we believe our approach will provide new methods for understanding both human design and the comparative biology of habitat spaces. This article is part of the theme issue 'Interdisciplinary approaches for uncovering the impacts of architecture on collective behaviour'.
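Visibility graph analysis links every pair of locations that have an unobstructed line of sight and then reads connectivity measures off the resulting graph. A toy 2D grid version conveys the idea; the paper's method operates on the full 3D nest surfaces, so everything below, including the sampled line-of-sight test, is purely illustrative:

    import numpy as np
    import networkx as nx

    def visible(a, b, occupied):
        """Approximate line-of-sight test: sample points along the segment a-b."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        for t in np.linspace(0.0, 1.0, 50):
            p = tuple(np.round(a + t * (b - a)).astype(int))
            if p in occupied and p != tuple(a.astype(int)) and p != tuple(b.astype(int)):
                return False
        return True

    def visibility_graph(free_cells, occupied):
        g = nx.Graph()
        g.add_nodes_from(free_cells)
        for i, u in enumerate(free_cells):
            for v in free_cells[i + 1:]:
                if visible(u, v, occupied):
                    g.add_edge(u, v)
        return g

    free = [(x, y) for x in range(5) for y in range(5) if (x, y) != (2, 2)]
    g = visibility_graph(free, occupied={(2, 2)})
    # Node degree (or closeness) then quantifies how conducive to connectivity a
    # location is, e.g. open chambers versus narrow passages in a nest.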
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Orlova, Svetlana, and Alexander Lopota. "Scene recognition for confined spaces in mobile robotics: current state and tendencies." Robotics and Technical Cybernetics 10, no. 1 (March 2022): 14–24. http://dx.doi.org/10.31776/rtcj.10102.

Full text
Abstract:
The article discusses the problem of scene recognition for mobile robotics and considers the subtasks that must be solved to implement a high-level understanding of the environment. The basis is an understanding of the geometry and semantics of the scene, which can be decomposed into the subtasks of robot localization, mapping and semantic analysis. Simultaneous localization and mapping (SLAM) techniques have already been applied successfully and, although some problems remain unresolved in dynamic environments, they are not the main obstacle here. The focus of the work is the task of semantic analysis of the scene, which requires three-dimensional segmentation. The field of 3D segmentation, like the field of image segmentation, has split into semantic and object (instance) segmentation, contrary to the needs of many potential applications. Panoptic segmentation, which combines the two and describes the scene most completely, is now beginning to develop. The paper reviews methods of 3D panoptic segmentation and identifies promising approaches. Open problems in scene recognition are also discussed. There is a clear trend towards complex incremental methods of metric-semantic SLAM, which combine segmentation with SLAM, and towards the use of scene graphs, which describe the geometry and semantics of scene elements and the relationships between them. Scene graphs are especially promising for mobile robotics, since they provide a transition from low-level representations of objects and spaces (for example, segmented point clouds) to a description of the scene at a high level of abstraction, close to a human one (a list of objects in the scene, their properties, and their locations relative to each other).
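A scene graph in this sense is simply a typed graph whose nodes carry geometry and semantics and whose edges carry relations between objects. A minimal sketch of such a structure (all field and relation names are illustrative, not any particular system's schema):

    from dataclasses import dataclass, field

    @dataclass
    class SceneNode:
        label: str            # semantic class, e.g. "table"
        centroid: tuple       # (x, y, z) in the map frame
        extent: tuple         # bounding-box size

    @dataclass
    class SceneGraph:
        nodes: dict = field(default_factory=dict)   # id -> SceneNode
        edges: list = field(default_factory=list)   # (id_a, relation, id_b)

        def add_relation(self, a, relation, b):
            self.edges.append((a, relation, b))

    sg = SceneGraph()
    sg.nodes[0] = SceneNode("table", (1.0, 2.0, 0.4), (1.2, 0.8, 0.05))
    sg.nodes[1] = SceneNode("cup", (1.1, 2.1, 0.5), (0.08, 0.08, 0.10))
    sg.add_relation(1, "on", 0)   # human-readable, high-level description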
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Baayen, R. Harald, Yu-Ying Chuang, and James P. Blevins. "Inflectional morphology with linear mappings." Mental Lexicon 13, no. 2 (December 31, 2018): 230–68. http://dx.doi.org/10.1075/ml.18010.baa.

Full text
Abstract:
This methodological study provides a step-by-step introduction to a computational implementation of word and paradigm morphology using linear mappings between vector spaces for form and meaning. Taking as starting point the linear regression model, the main concepts underlying linear mappings are introduced and illustrated with R code. It is then shown how vector spaces can be set up for Latin verb conjugations, using 672 inflected variants of two verbs each from the four main conjugation classes. It turns out that mappings from form to meaning (comprehension), and from meaning to form (production) can be carried out loss-free. This study concludes with a demonstration that when the graph of triphones, the units that underlie the form space, is mapped onto a 2-dimensional space with a self-organising algorithm from physics (graphopt), morphological functions show topological clustering, even though morphemic units do not play any role whatsoever in the model. It follows, first, that evidence for morphemes emerging from experimental studies that use, for instance, fMRI to localize morphemes in the brain does not guarantee the existence of morphemes in the brain, and second, that potential topological organization of morphological form in the cortex may depend to a high degree on the morphological system of a language.
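The core operation is multivariate linear regression: with a form matrix C (rows are word forms, columns are triphone cues) and a semantic matrix S, comprehension is the linear map F solving CF ≈ S, obtained in closed form via the pseudoinverse, and production is the reverse map G. A numpy sketch on toy data; the sizes echo the study but the matrices are random, not the Latin dataset:

    import numpy as np

    rng = np.random.default_rng(2)
    n_words, n_cues, n_sem = 672, 300, 100

    C = (rng.random((n_words, n_cues)) < 0.05).astype(float)  # binary triphone indicators
    S = rng.standard_normal((n_words, n_sem))                 # semantic vectors

    F = np.linalg.pinv(C) @ S    # comprehension: form -> meaning
    G = np.linalg.pinv(S) @ C    # production: meaning -> form

    S_hat = C @ F                # predicted meanings; under suitable rank
                                 # conditions this mapping can be loss-free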
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Wu, Xiaodong. "Efficient Algorithms for the Optimal-Ratio Region Detection Problems in Discrete Geometry with Applications." International Journal of Computational Geometry & Applications 19, no. 2 (April 2009): 141–59. http://dx.doi.org/10.1142/s0218195909002873.

Full text
Abstract:
In this paper, we study several interesting optimal-ratio region detection (ORD) problems in d-D (d ≥ 3) discrete geometric spaces, which arise in high-dimensional medical image segmentation. Given a d-D voxel grid of n cells, two classes of geometric regions that are enclosed by a single or two coupled smooth heightfield surfaces defined on the entire grid domain are considered. The objective functions are normalized by a function of the desired regions, which avoids a bias toward producing an overly large or small region as a result of data noise. The normalization functions that we employ are used in real medical image segmentation. To the best of our knowledge, no previous results on these problems in high dimensions are known. We develop a unified algorithmic framework based on a careful characterization of the intrinsic geometric structures and a nontrivial graph transformation scheme, yielding efficient polynomial-time algorithms for solving these ORD problems. Our main ideas include the following. We observe that the optimal solution to the ORD problems can be obtained via the construction of a convex hull for a set of O(n) unknown 2-D points using the hand-probing technique. The probing oracles are implemented by computing a minimum s-t cut in a weighted directed graph. The ORD problems are then solved by O(n) calls to the minimum s-t cut algorithm. For the class of regions bounded by a single heightfield surface, our further investigation shows that the O(n) calls to the minimum s-t cut algorithm are on a monotone parametric flow network, which enables detection of the optimal-ratio region within the complexity of computing a single maximum flow.
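One standard way to turn a ratio objective into a sequence of cut problems, named plainly here as a stand-in for the paper's hand-probing construction, is the Dinkelbach scheme: repeatedly minimize f(x) - lam * g(x) with a minimum s-t cut oracle and update lam until the linearized optimum reaches zero. A sketch under that assumption, with a toy oracle in place of a real cut solver:

    def minimize_ratio(cut_oracle, x0, tol=1e-9):
        """Dinkelbach iteration for min f(x)/g(x), with g(x) > 0.

        cut_oracle(lam) returns (x, f_x, g_x) minimizing f(x) - lam * g(x),
        e.g. by a minimum s-t cut on a suitably weighted graph.
        """
        x, f_x, g_x = x0
        lam = f_x / g_x
        while True:
            x, f_x, g_x = cut_oracle(lam)
            if abs(f_x - lam * g_x) < tol:    # no further improvement: lam is optimal
                return x, lam
            lam = f_x / g_x

    # Toy oracle over a finite candidate set (a stand-in for the cut solver):
    cands = [("A", 3.0, 2.0), ("B", 5.0, 4.0), ("C", 4.0, 2.5)]
    oracle = lambda lam: min(cands, key=lambda c: c[1] - lam * c[2])
    print(minimize_ratio(oracle, cands[0]))   # ('B', 1.25), the minimum ratio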
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Suzuki, Ikumi, Kazuo Hara, Masashi Shimbo, Yuji Matsumoto, and Marco Saerens. "Investigating the Effectiveness of Laplacian-Based Kernels in Hub Reduction." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1112–18. http://dx.doi.org/10.1609/aaai.v26i1.8295.

Full text
Abstract:
A “hub” is an object closely surrounded by, or very similar to, many other objects in the dataset. Recent studies by Radovanović et al. indicate that in high-dimensional spaces, hubs almost always emerge, and objects close to the data centroid tend to become hubs. In this paper, we show that the family of kernels based on the graph Laplacian makes all objects in the dataset equally similar to the centroid, and thus these kernels are expected to produce fewer hubs when used as a similarity measure. We investigate this hypothesis using both synthetic and real-world data. It turns out that these kernels suppress hubs in some cases but not always, and the results seem to be affected by the size of the data, a factor not discussed previously. However, for the datasets in which hubs are indeed reduced by the Laplacian-based kernels, these kernels work well in ranking and classification tasks. This result suggests that the number of hubs, which can be readily computed in an unsupervised fashion, can serve as a yardstick of whether Laplacian-based kernels will work effectively for a given dataset.
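The hypothesis is straightforward to probe: build a kNN graph, form a Laplacian-based kernel such as the regularized Laplacian kernel K = (I + beta * L)^(-1), and compare the skewness of k-occurrence counts, a standard hubness measure, under the raw similarity and under K. A sketch with illustrative parameters, not the paper's experimental protocol:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(3)
    X = rng.standard_normal((300, 50))              # high-dimensional points
    k, beta = 10, 1.0

    def knn_adjacency(S, k):
        np.fill_diagonal(S, -np.inf)                # exclude self-similarity
        idx = np.argsort(-S, axis=1)[:, :k]
        A = np.zeros_like(S)
        for i, row in enumerate(idx):
            A[i, row] = 1.0
        return np.maximum(A, A.T)                   # symmetrize

    def hubness(S, k):
        """Skewness of k-occurrence counts; large positive skew = strong hubs."""
        np.fill_diagonal(S, -np.inf)
        idx = np.argsort(-S, axis=1)[:, :k]
        counts = np.bincount(idx.ravel(), minlength=S.shape[0])
        return skew(counts)

    S0 = X @ X.T                                    # raw inner-product similarity
    A = knn_adjacency(S0.copy(), k)
    L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian
    K = np.linalg.inv(np.eye(len(A)) + beta * L)    # regularized Laplacian kernel

    print(hubness(S0.copy(), k), hubness(K.copy(), k))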
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Arthur, John Kingsley, Conghua Zhou, Eric Appiah Mantey, Jeremiah Osei-Kwakye, and Yaru Chen. "A Discriminative-Based Geometric Deep Learning Model for Cross Domain Recommender Systems." Applied Sciences 12, no. 10 (May 20, 2022): 5202. http://dx.doi.org/10.3390/app12105202.

Full text
Abstract:
Recommender systems (RS) have been widely deployed in many real-world applications, but usually suffer from the long-standing user/item cold-start problem. As a promising approach, cross-domain recommendation (CDR), which has attracted a surge of interest, aims to transfer the user preferences observed in a source domain to make recommendations in a target domain. Traditional machine learning and deep learning methods are not designed to learn from complex data representations such as graphs, manifolds and 3D objects, yet current trends in data generation increasingly produce exactly such representations. In addition, existing work does not consider the complex dimensions and locality structure of items, which contain discriminative information essential for improving recommendation accuracy. Furthermore, the similarity between test samples and their neighboring training data in the kernel space is not fully exploited for recommended objects belonging to the same category, so the embedded discriminative information is not captured effectively. These challenges leave the sparsity and item/user cold-start problems unsolved and hence impede the performance of cross-domain recommender systems, causing them to suggest less relevant, undifferentiated items. To handle these challenges, we propose a novel deep learning (DL) method, Discriminative Geometric Deep Learning (D-GDL), for cross-domain recommender systems. The proposed D-GDL introduces a discriminative function based on sparse local sensitivity into the DL network, together with a local representation learner (a local sensitivity-based deep convolutional belief network) that captures local geometric and visual information from the structure of the recommended 3D objects. A kernel-based component (a local sensitivity deep belief network) maps the complex structure of recommended objects into a high-dimensional feature space to achieve effective recognition. An improved kernel density estimator serves as a weighting function in building the high-dimensional feature space, making the method more resistant to geometric noise and improving computational performance. Experimental results show that the proposed D-GDL significantly outperforms state-of-the-art methods in both sparse and dense settings for cross-domain recommendation tasks.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Schmitz, M. A., J. L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R. I. Vito Capobianco, R. Clédassou, et al. "Euclid: Nonparametric point spread function field recovery through interpolation on a graph Laplacian." Astronomy & Astrophysics 636 (April 2020): A78. http://dx.doi.org/10.1051/0004-6361/201936094.

Full text
Abstract:
Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy. Aims. The aims of this paper are twofold. Firstly, we took steps toward a nonparametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario. Methods. We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true 2-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations. Results. Our approach yields an improvement over PSFEx in terms of both the PSF model and the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
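Interpolation on a graph Laplacian can be illustrated by its harmonic special case: PSF features measured at star positions are propagated to galaxy positions by minimizing the Laplacian quadratic form on a proximity graph, i.e. solving a Laplace equation with the stars as boundary values. A generic sketch of that idea, not the resolved components analysis pipeline of the paper:

    import numpy as np

    def harmonic_interpolate(W, known_idx, known_vals):
        """Fill in node values on a graph by minimizing f^T L f.

        W: symmetric non-negative affinity matrix over all positions (stars and
        galaxies); known_idx/known_vals: star nodes and a measured PSF feature.
        Solves L_uu f_u = -L_uk f_k; each component must contain a star node.
        """
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W
        unknown = np.setdiff1d(np.arange(n), known_idx)
        f = np.zeros(n)
        f[known_idx] = known_vals
        f[unknown] = np.linalg.solve(L[np.ix_(unknown, unknown)],
                                     -L[np.ix_(unknown, known_idx)] @ known_vals)
        return f

    # Toy chain star-galaxy-star: the middle node gets the average feature value.
    W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    print(harmonic_interpolate(W, np.array([0, 2]), np.array([1.0, 3.0])))  # [1. 2. 3.]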
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Kopitkov, Dmitry, and Vadim Indelman. "General-purpose incremental covariance update and efficient belief space planning via a factor-graph propagation action tree." International Journal of Robotics Research 38, no. 14 (September 10, 2019): 1644–73. http://dx.doi.org/10.1177/0278364919875199.

Full text
Abstract:
Fast covariance calculation is required both for simultaneous localization and mapping (SLAM; e.g., in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this article, we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in probabilistic inference, such as the introduction of new observations/variables or relinearization. Our approach is shown to recover them faster than other state-of-the-art methods. Second, we present a computationally efficient approach for BSP in high-dimensional state spaces, leveraging our incremental covariance update method. State-of-the-art BSP approaches perform belief propagation for each candidate action and then evaluate an objective function that typically includes an information-theoretic term, such as entropy or information gain. Yet, candidate actions often have similar parts (e.g., common trajectory parts), which are nevertheless evaluated separately for each candidate. Moreover, calculating the information-theoretic term involves a costly determinant computation over the entire information (covariance) matrix, which is O(n^3) with n being the dimension of the state, or costly Schur complement operations if only the marginal posterior covariance of certain variables is of interest. Our approach, rAMDL-Tree, extends our previous BSP method rAMDL by exploiting incremental covariance calculation and reusing calculations between common parts of non-myopic candidate actions, such that these parts are evaluated only once, in contrast to existing approaches. To that end, we represent all candidate actions together in a single unified graphical model, which we introduce and call a factor-graph propagation (FGP) action tree. Each edge of the FGP action tree represents a sub-action of one (or more) candidate action sequence(s), and evaluating its information impact requires specific covariance entries of the intermediate belief represented by the tree vertex from which the edge emanates (i.e., the tail of the arrow). Overall, our approach has only a one-time calculation that depends on n, while evaluating action impact does not depend on n. We perform a careful examination of our approaches in simulation, considering the problem of autonomous navigation in unknown environments, where rAMDL-Tree shows superior performance compared with rAMDL, while determining the same best actions.
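The calculation-reuse idea can be sketched independently of the covariance machinery: candidate action sequences with common prefixes share edges of the tree, and each edge's expensive contribution is cached the first time it is evaluated. A minimal sketch that treats the objective as additive over edges purely for illustration; edge_gain stands in for the paper's information-theoretic evaluation:

    class FGPNode:
        """Vertex of a factor-graph propagation action tree (illustrative)."""
        def __init__(self, action=None, parent=None):
            self.action, self.parent = action, parent
            self.children = {}
            self.gain = None                 # cached contribution of incoming edge

    def add_sequence(root, actions):
        """Insert a candidate action sequence, sharing common prefixes."""
        node = root
        for a in actions:
            node = node.children.setdefault(a, FGPNode(a, node))
        return node                          # leaf identifies the full candidate

    def total_gain(leaf, edge_gain):
        """Sum per-edge gains from leaf to root; each edge is evaluated once."""
        g, node = 0.0, leaf
        while node.parent is not None:
            if node.gain is None:
                node.gain = edge_gain(node.action)   # expensive step, done once
            g += node.gain
            node = node.parent
        return g

    root = FGPNode()
    leaves = [add_sequence(root, seq) for seq in
              [("fwd", "fwd", "left"), ("fwd", "fwd", "right"), ("fwd", "back")]]
    best = max(leaves, key=lambda l: total_gain(l, lambda a: len(a) * 0.1))

The shared ("fwd", "fwd") prefix of the first two candidates is evaluated once and reused, which is exactly the saving the action-tree representation buys.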
Styles: APA, Harvard, Vancouver, ISO, etc.