Journal articles on the topic 'High-dimensional sparse graph'

Consult the top 50 journal articles for your research on the topic 'High-dimensional sparse graph.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas, and Shivaram Venkataraman. "Demo of marius." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
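For readers who want a concrete picture of the objective such engines optimize, here is a minimal sketch of logistic link-prediction embedding training with negative sampling. It is a NumPy toy with invented sizes, not Marius' pipelined, disk-aware implementation.

```python
# Toy graph-embedding trainer: logistic loss on edge scores with one
# negative sample per edge. All dimensions and the graph are invented.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr = 100, 16, 0.1
edges = rng.integers(0, n_nodes, size=(500, 2))    # toy edge list
emb = rng.normal(scale=0.1, size=(n_nodes, dim))   # dense embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(10):
    for u, v in edges:
        w = rng.integers(0, n_nodes)               # random negative sample
        for tgt, label in ((v, 1.0), (w, 0.0)):
            score = sigmoid(emb[u] @ emb[tgt])
            grad = score - label                   # d(log-loss)/d(score)
            gu, gt = grad * emb[tgt], grad * emb[u]
            emb[u] -= lr * gu
            emb[tgt] -= lr * gt
```

Real systems like Marius differ mainly in how they stream partitions of the embedding table through the GPU, not in the basic objective.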
2

Liu, Jianyu, Guan Yu, and Yufeng Liu. "Graph-based sparse linear discriminant analysis for high-dimensional classification." Journal of Multivariate Analysis 171 (May 2019): 250–69. http://dx.doi.org/10.1016/j.jmva.2018.12.007.

3

Wang, Li-e., and Xianxian Li. "A Clustering-Based Bipartite Graph Privacy-Preserving Approach for Sharing High-Dimensional Data." International Journal of Software Engineering and Knowledge Engineering 24, no. 07 (September 2014): 1091–111. http://dx.doi.org/10.1142/s0218194014500363.

Abstract:
Driven by mutual benefits, there is a demand for transactional data sharing among organizations or parties for research or business analysis purposes. Because transactional data may contain sensitive personal information, it is an essential concern to provide privacy-preserving data sharing while maintaining data utility. Existing privacy-preserving methods, such as k-anonymity and l-diversity, cannot handle high-dimensional sparse data well, since they bring about much data distortion in the anonymization process. In this paper, we use bipartite graphs with node attributes to model high-dimensional sparse data, and then propose a privacy-preserving approach for sharing transactional data from a new perspective, in which the bipartite graph is anonymized into a weighted bipartite graph by clustering node attributes. Our approach can maintain the privacy of the associations between entities and resist certain attackers with knowledge of partial items. Experiments have been performed on real-life data sets to measure the information loss and the accuracy of answering aggregate queries. Experimental results show that the approach improves the balance between privacy protection and data utility.
4

Saul, Lawrence K. "A tractable latent variable model for nonlinear dimensionality reduction." Proceedings of the National Academy of Sciences 117, no. 27 (June 22, 2020): 15403–8. http://dx.doi.org/10.1073/pnas.1916012117.

Abstract:
We propose a latent variable model to discover faithful low-dimensional representations of high-dimensional data. The model computes a low-dimensional embedding that aims to preserve neighborhood relationships encoded by a sparse graph. The model both leverages and extends current leading approaches to this problem. Like t-distributed Stochastic Neighborhood Embedding, the model can produce two- and three-dimensional embeddings for visualization, but it can also learn higher-dimensional embeddings for other uses. Like LargeVis and Uniform Manifold Approximation and Projection, the model produces embeddings by balancing two goals—pulling nearby examples closer together and pushing distant examples further apart. Unlike these approaches, however, the latent variables in our model provide additional structure that can be exploited for learning. We derive an Expectation–Maximization procedure with closed-form updates that monotonically improve the model’s likelihood: In this procedure, embeddings are iteratively adapted by solving sparse, diagonally dominant systems of linear equations that arise from a discrete graph Laplacian. For large problems, we also develop an approximate coarse-graining procedure that avoids the need for negative sampling of nonadjacent nodes in the graph. We demonstrate the model’s effectiveness on datasets of images and text.
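To make the core numerical step concrete: the EM updates described above reduce to solving sparse, diagonally dominant linear systems derived from a graph Laplacian. Below is a minimal SciPy sketch of such a solve on a toy graph; the diagonal shift of 0.5 and the right-hand side are arbitrary stand-ins, not the paper's actual update.

```python
# Solve a sparse, diagonally dominant Laplacian-based system, the kind of
# linear-algebra step the paper's EM updates rely on. Toy 4-node graph.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
rows, cols = zip(*edges)
A = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
A = A + A.T                                         # symmetric adjacency
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A # graph Laplacian
M = L + 0.5 * sp.identity(n)                        # shift -> strictly diagonally dominant
b = np.ones(n)                                      # placeholder right-hand side
x = spsolve(M.tocsr(), b)                           # embedding-update-style solve
```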
5

Li, Xinyu, Xiaoguang Gao, and Chenfeng Wang. "A Novel BN Learning Algorithm Based on Block Learning Strategy." Sensors 20, no. 21 (November 7, 2020): 6357. http://dx.doi.org/10.3390/s20216357.

Abstract:
Learning accurate Bayesian Network (BN) structures from high-dimensional and sparse data is difficult because of the high computational complexity. To learn accurate structures for high-dimensional and sparse data faster, this paper adopts a divide-and-conquer strategy and proposes a block learning algorithm with a mutual-information-based K-means algorithm (the BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in the BN and a maximum-minimum parents and children (MMPC) algorithm to obtain the whole skeleton of the BN and find possible graph structures based on the separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to get candidate BNs, and the best BN is selected by a scoring function. Experiments show that, for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to existing block learning algorithms, the BLMKM algorithm has a time advantage while ensuring accuracy. The analysis of a real radar effect mechanism dataset shows that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is efficient for BN learning and has practical application value.
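A hedged sketch of the blocking step: score pairwise mutual information between discrete variables, then cluster the rows of the MI matrix with K-means to form blocks. This only illustrates the idea with scikit-learn; it is not the BLMKM implementation, and the data and cluster count are arbitrary.

```python
# Group variables into blocks by clustering a pairwise mutual-information
# matrix; per-block structure learning would then run on each group.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 8))               # 8 discrete variables, toy data
n_vars = X.shape[1]
mi = np.zeros((n_vars, n_vars))
for i in range(n_vars):
    for j in range(n_vars):
        mi[i, j] = mutual_info_score(X[:, i], X[:, j])
blocks = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(mi)
print(blocks)                                       # block label per variable
```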
6

Li, Ying, Xiaojun Xu, and Jianbo Li. "High-Dimensional Sparse Graph Estimation by Integrating DTW-D Into Bayesian Gaussian Graphical Models." IEEE Access 6 (2018): 34279–87. http://dx.doi.org/10.1109/access.2018.2849213.

7

Dobson, Andrew, and Kostas Bekris. "Improved Heuristic Search for Sparse Motion Planning Data Structures." Proceedings of the International Symposium on Combinatorial Search 5, no. 1 (September 1, 2021): 196–97. http://dx.doi.org/10.1609/socs.v5i1.18334.

Abstract:
Sampling-based methods provide efficient, flexible solutions for motion planning, even for complex, high-dimensional systems. Asymptotically optimal planners ensure convergence to the optimal solution, but produce dense structures. This work shows how to extend sparse methods achieving asymptotic near-optimality using multiple-goal heuristic search during graph construction. The resulting method produces identical output to the existing Incremental Roadmap Spanner approach but in an order of magnitude less time.
8

Kefato, Zekarias, and Sarunas Girdzijauskas. "Gossip and Attend: Context-Sensitive Graph Representation Learning." Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 351–59. http://dx.doi.org/10.1609/icwsm.v14i1.7305.

Abstract:
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representations of high-dimensional and often sparse graphs. Most studies explore the structure and metadata associated with the graph using random walks and employ unsupervised or semi-supervised learning schemes. Learning in these methods is context-free, resulting in only a single representation per node. Recent studies have questioned the adequacy of a single representation and proposed context-sensitive approaches, which are capable of extracting multiple node representations for different contexts. This has proved highly effective in applications such as link prediction and ranking. However, most of these methods rely on additional textual features that require complex and expensive RNNs or CNNs to capture high-level features, or rely on a community detection algorithm to identify multiple contexts of a node. In this study we show that, in order to extract high-quality context-sensitive node representations, it is not necessary to rely on supplementary node features, nor to employ computationally heavy and complex models. We propose Goat, a context-sensitive algorithm inspired by gossip communication and a mutual attention mechanism applied simply over the structure of the graph. We show the efficacy of Goat using 6 real-world datasets on link prediction and node clustering tasks and compare it against 12 popular and state-of-the-art (SOTA) baselines. Goat consistently outperforms them and achieves up to 12% and 19% gain over the best performing methods on link prediction and clustering tasks, respectively.
9

Li, Pei Heng, Taeho Lee, and Hee Yong Youn. "Dimensionality Reduction with Sparse Locality for Principal Component Analysis." Mathematical Problems in Engineering 2020 (May 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/9723279.

Abstract:
Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into a low-dimensional representation. The existing schemes usually preserve either only the global structure or only the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively consider the trade-off between complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph that is more robust against noise. In addition, simultaneous learning of the projection matrix and the sparse similarity matrix is possible. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.
10

Chen, Dongming, Mingshuo Nie, Hupo Zhang, Zhen Wang, and Dongqi Wang. "Network Embedding Algorithm Taking in Variational Graph AutoEncoder." Mathematics 10, no. 3 (February 2, 2022): 485. http://dx.doi.org/10.3390/math10030485.

Abstract:
Complex networks with node attribute information are employed to represent complex relationships between objects. Research on attributed network embedding fuses the topology and the node attribute information of an attributed network in a common latent representation space, encoding the high-dimensional sparse network information into a low-dimensional dense vector representation and effectively improving the performance of network analysis tasks. Current research on attributed network embedding faces the problems of the high-dimensional sparsity of the attribute feature matrix and the underutilization of attribute information. In this paper, we propose a network embedding algorithm taking in a variational graph autoencoder (NEAT-VGA). The algorithm first pre-processes the attribute features, i.e., performs attribute feature learning for the network nodes. Then, the feature learning matrix and the adjacency matrix of the network are fed into the variational graph autoencoder to obtain the Gaussian distribution of the latent vectors, from which high-quality node embedding vectors can more easily be generated. The node embeddings obtained by sampling this Gaussian distribution are then reconstructed with structural and attribute losses. The loss function is minimized by iterative training until a low-dimensional vector representation containing both the network structure information and the attribute information of nodes is obtained, and the performance of the algorithm is evaluated by link prediction experiments.
11

Sturtevant, Nathan. "A Sparse Grid Representation for Dynamic Three-Dimensional Worlds." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 7, no. 1 (October 9, 2011): 73–78. http://dx.doi.org/10.1609/aiide.v7i1.12438.

Abstract:
Grid representations offer many advantages for path planning. Lookups in grids are fast, due to the uniform memory layout, and it is easy to modify grids. But, grids often have significant memory requirements, they cannot directly represent more complex surfaces, and path planning is slower due to their high granularity representation of the world. The speed of path planning on grids has been addressed using abstract representations, such as has been documented in work on Dragon Age: Origins. The abstract representation used in this game was compact, preventing permanent changes to the grid. In this paper we introduce a sparse grid representation, where grid cells are only stored where necessary. From this sparse representation we incrementally build an abstract graph which represents possible movement in the world at a high-level of granularity. This sparse representation also allows the representation of three-dimensional worlds. This representation allows the world to be incrementally changed in under a millisecond, reducing the maximum memory required to store a map and abstraction from Dragon Age: Origins by nearly one megabyte. Fundamentally, the representation allows previously allocated but unused memory to be used in ways that result in higher-quality planning and more intelligent agents.
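As a rough illustration of the storage idea (not the Dragon Age: Origins code), a dictionary-backed sparse grid stores cells only where they carry data, so point edits are O(1) and empty space costs no memory. The API and coordinates below are invented.

```python
# Minimal sparse 3D grid: absent keys mean empty space, so incremental
# world changes are cheap and memory scales with occupied cells only.
class SparseGrid3D:
    def __init__(self):
        self.cells = {}                          # (x, y, z) -> cell data

    def set(self, x, y, z, value):
        self.cells[(x, y, z)] = value            # O(1) incremental change

    def get(self, x, y, z, default="empty"):
        return self.cells.get((x, y, z), default)

grid = SparseGrid3D()
grid.set(10, 2, 7, "walkable")
print(grid.get(10, 2, 7), grid.get(0, 0, 0))     # walkable empty
```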
12

Li, Shuang, Bing Liu, and Chen Zhang. "Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing." Computational Intelligence and Neuroscience 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/4920670.

Abstract:
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
13

Wang, Zhan, Qiuqi Ruan, and Gaoyun An. "Face Recognition Using Double Sparse Local Fisher Discriminant Analysis." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/636928.

Abstract:
Local Fisher discriminant analysis (LFDA) was proposed for dealing with the multimodal problem. It combines the idea of locality preserving projections (LPP), preserving the local structure of high-dimensional data, with the idea of Fisher discriminant analysis (FDA), obtaining discriminant power. However, LFDA suffers from the undersampled problem, as do many dimensionality reduction methods, and its projection matrix is not sparse. In this paper, we propose double sparse local Fisher discriminant analysis (DSLFDA) for face recognition. The proposed method first constructs a sparse and data-adaptive graph with a nonnegative constraint. Then, DSLFDA reformulates the objective function as a regression-type optimization problem. The undersampled problem is avoided naturally, and the sparse solution can be obtained by adding an l1 penalty to the regression-type problem. Experiments on the Yale, ORL, and CMU PIE face databases demonstrate the effectiveness of the proposed method.
14

Majeed, Abdul, and Sungchang Lee. "A Fast Global Flight Path Planning Algorithm Based on Space Circumscription and Sparse Visibility Graph for Unmanned Aerial Vehicle." Electronics 7, no. 12 (December 2, 2018): 375. http://dx.doi.org/10.3390/electronics7120375.

Abstract:
This paper proposes a new flight path planning algorithm that finds collision-free, optimal/near-optimal and flyable paths for unmanned aerial vehicles (UAVs) in three-dimensional (3D) environments with fixed obstacles. The proposed algorithm significantly reduces pathfinding computing time without significantly degrading path lengths by using space circumscription and a sparse visibility graph in the pathfinding process. We devise a novel method that exploits information about obstacle geometry to circumscribe the search space in the form of a half cylinder, from which a working path for the UAV can be computed without sacrificing the guarantees on near-optimality and speed. Furthermore, we generate a sparse visibility graph from the circumscribed space and find the initial path, which is subsequently optimized. The proposed algorithm effectively resolves the efficiency and optimality trade-off by searching for the path only in the high-priority circumscribed space of a map. The simulation results obtained from various maps and comparison with the existing methods show the effectiveness of the proposed algorithm and verify the aforementioned claims.
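Once a sparse visibility graph exists, pathfinding over it is ordinary shortest-path search. The sketch below runs Dijkstra over a handful of invented 3D waypoints; the geometry, the edge set and the omission of obstacle tests are illustrative assumptions, not the paper's algorithm.

```python
# Dijkstra over a tiny 3D waypoint graph, the search step that follows
# visibility-graph construction. All coordinates/edges are invented.
import heapq
import math

pts = {0: (0, 0, 0), 1: (5, 1, 2), 2: (4, 6, 1), 3: (9, 7, 3)}
edges = {0: [1, 2], 1: [3], 2: [3], 3: []}       # "visible" waypoint pairs

def dijkstra(src, dst):
    best, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > best.get(u, math.inf):
            continue                              # stale queue entry
        for v in edges[u]:
            nd = d + math.dist(pts[u], pts[v])    # Euclidean edge cost
            if nd < best.get(v, math.inf):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

print(dijkstra(0, 3))                             # length of shortest flyable path
```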
15

Li, Zixuan, Hao Li, Kenli Li, Fan Wu, Lydia Chen, and Keqin Li. "Locality Sensitive Hash Aggregated Nonlinear Neighborhood Matrix Factorization for Online Sparse Big Data Analysis." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–27. http://dx.doi.org/10.1145/3497749.

Abstract:
Matrix factorization (MF) can extract the low-rank features and integrate the information of the data manifold distribution from high-dimensional data, which can consider the nonlinear neighborhood information. Thus, MF has drawn wide attention for low-rank analysis of sparse big data, e.g., Collaborative Filtering (CF) Recommender Systems, Social Networks, and Quality of Service. However, the following two problems exist: (1) huge computational overhead for the construction of the Graph Similarity Matrix (GSM) and (2) huge memory overhead for the intermediate GSM. Therefore, GSM-based MF, e.g., kernel MF, graph regularized MF, and so on, cannot be directly applied to the low-rank analysis of sparse big data on cloud and edge platforms. To solve this intractable problem for sparse big data analysis, we propose Locality Sensitive Hashing (LSH) aggregated MF (LSH-MF), which can solve the following problems: (1) The proposed probabilistic projection strategy of LSH-MF can avoid the construction of the GSM. Furthermore, LSH-MF can satisfy the requirement for the accurate projection of sparse big data. (2) To run LSH-MF for fine-grained parallelization and online learning on GPUs, we also propose CULSH-MF, which works on CUDA parallelization. Experimental results show that CULSH-MF can not only reduce the computational time and memory overhead but also obtain higher accuracy. Compared with deep learning models, CULSH-MF can not only save training time but also achieve the same accuracy performance.
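For intuition about how hashing sidesteps the GSM: a random-projection LSH assigns each row a short binary code, and rows sharing a bucket become candidate neighbors without ever materializing a similarity matrix. The sketch below is a generic toy with arbitrary sizes, not CULSH-MF.

```python
# Random-projection LSH: bucket rows by the sign pattern of projections
# onto random hyperplanes; buckets give candidate neighbors in O(n).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 50))                        # toy user/item feature rows
planes = rng.normal(size=(50, 8))                 # 8 random hyperplanes
codes = (X @ planes > 0).astype(int)              # 8-bit signature per row
keys = codes @ (1 << np.arange(8))                # pack bits into a bucket id
buckets = {}
for i, k in enumerate(keys):
    buckets.setdefault(int(k), []).append(i)      # candidate-neighbor lists
```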
16

Yang, Ye, Yongli Hu, and Fei Wu. "Sparse and Low-Rank Subspace Data Clustering with Manifold Regularization Learned by Local Linear Embedding." Applied Sciences 8, no. 11 (November 6, 2018): 2175. http://dx.doi.org/10.3390/app8112175.

Abstract:
Data clustering is an important research topic in the data mining and signal processing communities. Among data clustering methods, the subspace spectral clustering methods based on the self-expression model, e.g., the Sparse Subspace Clustering (SSC) and the Low Rank Representation (LRR) methods, have attracted a lot of attention and shown good performance. The key step of SSC and LRR is to construct a proper affinity or similarity matrix of the data for spectral clustering. Recently, a Laplacian graph constraint was introduced into the basic SSC and LRR and obtained considerable improvement. However, the current graph construction methods do not well exploit and reveal the non-linear properties of the clustering data, which are common for high-dimensional data. In this paper, we introduce the classic manifold learning method, Local Linear Embedding (LLE), to learn the non-linear structure underlying the data and use the learned local geometry of the manifold as a regularization for SSC and LRR, which results in the proposed LLE-SSC and LLE-LRR clustering methods. Additionally, to solve the complex optimization problems involved in the proposed models, an efficient algorithm is also proposed. We test the proposed data clustering methods on several types of public databases. The experimental results show that our methods outperform typical subspace clustering methods with a Laplacian graph constraint.
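The LLE building block referenced above is available off the shelf; a minimal scikit-learn call is shown below. How the learned local geometry is folded into the SSC/LRR objectives is not reproduced here, and the data and neighbor count are placeholders.

```python
# Locally Linear Embedding via scikit-learn: learn local reconstruction
# weights and a low-dimensional embedding that respects them.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.default_rng(0).random((100, 20))    # toy high-dimensional data
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
Y = lle.fit_transform(X)                          # manifold-respecting coordinates
print(Y.shape)                                    # (100, 2)
```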
17

Bradley, Patrick, Sina Keller, and Martin Weinmann. "Unsupervised Feature Selection Based on Ultrametricity and Sparse Training Data: A Case Study for the Classification of High-Dimensional Hyperspectral Data." Remote Sensing 10, no. 10 (September 29, 2018): 1564. http://dx.doi.org/10.3390/rs10101564.

Abstract:
In this paper, we investigate the potential of unsupervised feature selection techniques for classification tasks, where only sparse training data are available. This is motivated by the fact that unsupervised feature selection techniques combine the advantages of standard dimensionality reduction techniques (which only rely on the given feature vectors and not on the corresponding labels) and supervised feature selection techniques (which retain a subset of the original set of features). Thus, feature selection becomes independent of the given classification task and, consequently, a subset of generally versatile features is retained. We present different techniques relying on the topology of the given sparse training data. Thereby, the topology is described with an ultrametricity index. For the latter, we take into account the Murtagh Ultrametricity Index (MUI) which is defined on the basis of triangles within the given data and the Topological Ultrametricity Index (TUI) which is defined on the basis of a specific graph structure. In a case study addressing the classification of high-dimensional hyperspectral data based on sparse training data, we demonstrate the performance of the proposed unsupervised feature selection techniques in comparison to standard dimensionality reduction and supervised feature selection techniques on four commonly used benchmark datasets. The achieved classification results reveal that involving supervised feature selection techniques leads to similar classification results as involving unsupervised feature selection techniques, while the latter perform feature selection independently from the given classification task and thus deliver generally versatile features.
18

Zhou, Ruqin, and Wanshou Jiang. "A Ridgeline-Based Terrain Co-registration for Satellite LiDAR Point Clouds in Rough Areas." Remote Sensing 12, no. 13 (July 6, 2020): 2163. http://dx.doi.org/10.3390/rs12132163.

Abstract:
Registering extensive, enormous and sparse satellite light detection and ranging (LiDAR) point clouds is still a completely new and challenging task. Aimed at this problem, this study provides a ridgeline-based terrain co-registration method for satellite LiDAR point clouds in rough areas. This method has several merits: (1) only ridgelines are extracted as neighbor information for feature description and their intersections are extracted as keypoints, which greatly reduces the number of points for subsequent processing; the extracted keypoints are of high repeatability and distinctiveness; (2) a new local reference frame (LRF) construction method is designed by combining both three-dimensional (3D) coordinate and normal vector covariance matrices, which effectively improves its direction consistency; (3) a minimum cost-maximum flow (MCMF) graph-matching strategy is adopted to maximize the similarity sum in a sparse matching graph, which avoids the "many-to-many" and "one-to-many" problems caused by traditional matching strategies; (4) a transformation matrix-based clustering is adopted with a least squares (LS)-based registration, where mismatches are eliminated and correct pairs fully participate in optimal parameter estimation to improve stability. Experiments on simulated satellite LiDAR point clouds show that this method can effectively remove mismatches and estimate optimal parameters with high accuracy, especially in rough areas.
19

Majeed, Abdul, and Seong Oun Hwang. "Path Planning Method for UAVs Based on Constrained Polygonal Space and an Extremely Sparse Waypoint Graph." Applied Sciences 11, no. 12 (June 8, 2021): 5340. http://dx.doi.org/10.3390/app11125340.

Abstract:
Finding an optimal/quasi-optimal path for Unmanned Aerial Vehicles (UAVs) utilizing full map information yields time performance degradation in large and complex three-dimensional (3D) urban environments populated by various obstacles. A major portion of the computing time is usually wasted on modeling and exploration of spaces that have a very low possibility of providing optimal/sub-optimal paths. However, computing time can be significantly reduced by searching for paths solely in the spaces that have the highest priority of providing an optimal/sub-optimal path. Many Path Planning (PP) techniques have been proposed, but a majority of the existing techniques equally evaluate many spaces of the maps, including unlikely ones, thereby creating time performance issues. Exploring too many spaces on maps while searching for a path, instead of focusing on high-probability spaces, yields extensive computing-time overhead. This paper presents a new PP method that finds optimal/quasi-optimal and safe (e.g., collision-free) working paths for UAVs in a 3D urban environment encompassing substantial obstacles. By using Constrained Polygonal Space (CPS) and an Extremely Sparse Waypoint Graph (ESWG) while searching for a path, the proposed PP method significantly lowers pathfinding time complexity without degrading the length of the path by much. We suggest an intelligent method exploiting obstacle geometry information to constrain the search space in a 3D polygon form from which a quasi-optimal flyable path can be found quickly. Furthermore, we perform task modeling with an ESWG using as few nodes and edges from the CPS as possible, and we find an abstract path that is subsequently improved. The results achieved from extensive experiments and comparison with prior methods certify the efficacy of the proposed method and verify the above assertions.
20

Kiesel, Scott, Ethan Burns, and Wheeler Ruml. "Abstraction-Guided Sampling for Motion Planning." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 162–63. http://dx.doi.org/10.1609/socs.v3i1.18265.

Abstract:
Motion planning in continuous space is a fundamental robotics problem that has been approached from many perspectives. Rapidly-exploring Random Trees (RRTs) use sampling to efficiently traverse the continuous and high-dimensional state space. Heuristic graph search methods use lower bounds on solution cost to focus effort on portions of the space that are likely to be traversed by low-cost solutions. In this work, we bring these two ideas together in a technique called f-biasing: we use estimates of solution cost, computed as in heuristic search, to guide sparse sampling, as in RRTs. We see this new technique as strengthening the connections between motion planning in robotics and combinatorial search in artificial intelligence.
21

Zhang, Zhihao. "A Method of Recommending Physical Education Network Course Resources Based on Collaborative Filtering Technology." Scientific Programming 2021 (October 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/9531111.

Abstract:
Current research on e-learning shows that existing e-learning systems applied to the recommendation of learning resources offer only two search methods: Top-N and keyword search. These search methods cannot effectively recommend learning resources to learners. Therefore, in this paper, collaborative filtering recommendation technology is applied to the process of personalized recommendation of learning resources. We obtain user content and functional interest and predict the comprehensive interest of web and big data through an infinitely deep neural network. Based on the collaborative knowledge graph and the collaborative filtering algorithm, the semantic information of teaching network resources is extracted from the collaborative knowledge graph. Following the principles of nearest-neighbor recommendation, the course attribute value preference matrix (APM) is obtained first. Next, the predicted course values are sorted in descending order, and the top T courses with the highest predicted values are selected as the final recommended course set for the target learners. Each course has its own online classroom; the teacher publishes online class details ahead of time, and students can purchase online access with the classroom number and password. The experimental results show that the optimal number of clusters k is 9. Furthermore, for extremely sparse matrices, the collaborative filtering method is more suitable for clustering in the transformed low-dimensional space. The average recommendation satisfaction of the collaborative filtering method is approximately 43.6%, which demonstrates high recommendation quality.
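For readers unfamiliar with the basic mechanism, here is a minimal user-based collaborative-filtering prediction on a toy rating matrix (cosine similarity plus a weighted average of neighbors). It is a generic textbook sketch, not the paper's APM- and clustering-based pipeline.

```python
# Predict an unknown rating from similar users' ratings of the same item.
# 0 denotes "unrated"; the matrix and weighting are purely illustrative.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def predict(R, user, item):
    mask = R[:, item] > 0                         # users who rated the item
    sims = np.array([
        R[user] @ R[v] / (np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-9)
        for v in range(len(R))])
    sims[user] = 0.0                              # exclude the target user
    w = sims * mask
    return (w @ R[:, item]) / (w.sum() + 1e-9)    # similarity-weighted average

print(predict(R, user=0, item=2))                 # predicted rating for item 2
```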
22

Javidian, Mohammad Ali, Marco Valtorta, and Pooyan Jamshidi. "AMP Chain Graphs: Minimal Separators and Structure Learning Algorithms." Journal of Artificial Intelligence Research 69 (October 7, 2020): 419–70. http://dx.doi.org/10.1613/jair.1.12101.

Abstract:
This paper deals with chain graphs (CGs) under the Andersson–Madigan–Perlman (AMP) interpretation. We address the problem of finding a minimal separator in an AMP CG, namely, finding a set Z of nodes that separates a given non-adjacent pair of nodes such that no proper subset of Z separates that pair. We analyze several versions of this problem and offer polynomial time algorithms for each. These include finding a minimal separator from a restricted set of nodes, finding a minimal separator for two given disjoint sets, and testing whether a given separator is minimal. To address the problem of learning the structure of AMP CGs from data, we show that the PC-like algorithm is order dependent, in the sense that the output can depend on the order in which the variables are given. We propose several modifications of the PC-like algorithm that remove part or all of this order-dependence. We also extend the decomposition-based approach for learning Bayesian networks (BNs) to learn AMP CGs, which include BNs as a special case, under the faithfulness assumption. We prove the correctness of our extension using the minimal separator results. Experiments using standard benchmarks and synthetically generated models and data demonstrate the competitive performance of our decomposition-based method, called LCD-AMP, in comparison with (modified versions of) the PC-like algorithm. The LCD-AMP algorithm usually outperforms the PC-like algorithm, and our modifications of the PC-like algorithm learn structures that are more similar to the underlying ground truth graphs than the original PC-like algorithm, especially in high-dimensional settings. In particular, we empirically show that the results of both algorithms are more accurate and stable when the sample size is reasonably large and the underlying graph is sparse.
23

Chen, Junting, Liyun Zhong, and Caiyun Cai. "Using Exponential Kernel for Semi-Supervised Word Sense Disambiguation." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6929–34. http://dx.doi.org/10.1166/jctn.2016.5649.

Abstract:
Word sense disambiguation (WSD) in natural language text is a fundamental semantic understanding task at the lexical level in natural language processing (NLP) applications. Kernel methods such as support vector machine (SVM) have been successfully applied to WSD. This is mainly due to their relatively high classification accuracy as well as their ability to handle high dimensional and sparse data. A significant challenge in WSD is to reduce the need for labeled training data while maintaining an acceptable performance. In this paper, we present a semi-supervised technique using the exponential kernel for WSD. Specifically, the semantic similarities between terms are first determined with both labeled and unlabeled training data by means of a diffusion process on a graph defined by lexicon and co-occurrence information, and the exponential kernel is then constructed based on the learned semantic similarity. Finally, the SVM classifier trains a model for each class during the training phase and this model is then applied to all test examples in the test phase. The main feature of this approach is that it takes advantage of the exponential kernel to reveal the semantic similarities between terms in an unsupervised manner, which provides a kernel framework for semi-supervised learning. Experiments on several SENSEVAL benchmark data sets demonstrate the proposed approach is sound and effective.
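The exponential kernel itself is easy to state: K = exp(βH) for a graph matrix H, computable with a matrix exponential. A toy SciPy version follows; the adjacency matrix and β are invented, and the paper's semi-supervised diffusion setup over lexicon and co-occurrence graphs is not reproduced.

```python
# Exponential (diffusion) kernel over a small term graph: K = expm(beta * A).
# K[i, j] reflects diffusion-based similarity between terms i and j and can
# be fed to an SVM as a precomputed kernel.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # toy term co-occurrence graph
beta = 0.5                                       # diffusion rate (arbitrary here)
K = expm(beta * A)                               # exponential kernel matrix
print(np.round(K, 3))
```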
24

Zhao, Haitao, Sujay Datta, and Zhong-Hui Duan. "An Integrated Approach of Learning Genetic Networks From Genome-Wide Gene Expression Data Using Gaussian Graphical Model and Monte Carlo Method." Bioinformatics and Biology Insights 17 (January 2023): 117793222311529. http://dx.doi.org/10.1177/11779322231152972.

Abstract:
Global genetic networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single genes or local networks. The Gaussian graphical model (GGM) is widely applied to learn genetic networks because it defines an undirected graph decoding the conditional dependence between genes. Many algorithms based on the GGM have been proposed for learning genetic network structures. Because the number of gene variables is typically far more than the number of samples collected, and a real genetic network is typically sparse, the graphical lasso implementation of GGM has become a popular tool for inferring the conditional interdependence among genes. However, graphical lasso, although showing good performance on low dimensional data sets, is computationally expensive and inefficient, or even unable to work directly, on genome-wide gene expression data sets. In this study, the method of Monte Carlo Gaussian graphical model (MCGGM) was proposed to learn global genetic networks of genes. This method uses a Monte Carlo approach to sample subnetworks from genome-wide gene expression data and graphical lasso to learn the structures of the subnetworks. The learned subnetworks are then integrated to approximate a global genetic network. The proposed method was evaluated with a relatively small real data set of RNA-seq expression levels. The results indicate that the proposed method shows a strong ability to decode interactions with high conditional dependence among genes. The method was then applied to genome-wide data sets of RNA-seq expression levels. The gene interactions with high interdependence from the estimated global networks show that most of the predicted gene-gene interactions have been reported in the literature as playing important roles in different human cancers. The results also validate the ability and reliability of the proposed method to identify high conditional dependences among genes in large-scale data sets.
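A hedged sketch of the Monte Carlo idea, assuming scikit-learn's GraphicalLasso and toy Gaussian data: fit graphical lasso on random variable subsets and accumulate edge evidence across draws. The subset size, penalty and threshold are placeholders, not the authors' settings.

```python
# Monte-Carlo-style subnetwork sampling + graphical lasso, then edge-vote
# aggregation to approximate a global sparse network. Illustrative only.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                    # 60 samples, 200 "genes"
p, k, n_draws = X.shape[1], 20, 50
edge_counts = np.zeros((p, p))
for _ in range(n_draws):
    idx = rng.choice(p, size=k, replace=False)    # random subnetwork of genes
    gl = GraphicalLasso(alpha=0.5).fit(X[:, idx])
    nz = np.abs(gl.precision_) > 1e-4             # recovered partial dependences
    edge_counts[np.ix_(idx, idx)] += nz           # vote for recurring edges
# edges that recur across many draws approximate the global network
```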
25

Rütimann, Philipp, and Peter Bühlmann. "High dimensional sparse covariance estimation via directed acyclic graphs." Electronic Journal of Statistics 3 (2009): 1133–60. http://dx.doi.org/10.1214/09-ejs534.

26

Urakov, A. R., and T. V. Timeryaev. "All-Pairs Shortest Paths Algorithm for High-dimensional Sparse Graphs." Computer Science and Information Technology 1, no. 3 (November 2013): 233–37. http://dx.doi.org/10.13189/csit.2013.010309.

27

Han, Fang, Xiaoyan Han, Han Liu, and Brian Caffo. "Sparse median graphs estimation in a high-dimensional semiparametric model." Annals of Applied Statistics 10, no. 3 (September 2016): 1397–426. http://dx.doi.org/10.1214/16-aoas940.

28

Songhafouo Tsopméné, Paul Arnaud, and Victor Turchin. "Rational homology and homotopy of high-dimensional string links." Forum Mathematicum 30, no. 5 (September 1, 2018): 1209–35. http://dx.doi.org/10.1515/forum-2016-0192.

Abstract:
Arone and the second author showed that when the dimensions are in the stable range, the rational homology and homotopy of the high-dimensional analogues of spaces of long knots can be calculated as the homology of a direct sum of finite graph-complexes that they described explicitly. They also showed that these homology and homotopy groups can be interpreted as the higher-order Hochschild homology, also called Hochschild–Pirashvili homology. In this paper, we generalize all these results to high-dimensional analogues of spaces of string links. The methods of our paper are applicable in the range when the ambient dimension is at least twice the maximal dimension of a link component plus two, which in particular guarantees that the spaces under study are connected. However, we conjecture that our homotopy graph-complex computes the rational homotopy groups of link spaces always when the codimension is greater than two, i.e. always when the Goodwillie–Weiss calculus is applicable. Using Haefliger’s approach to calculate the groups of isotopy classes of higher-dimensional links, we confirm our conjecture at the level of $\pi_0$.
29

Hurley, C. B., and R. W. Oldford. "Graphs as navigational infrastructure for high dimensional data spaces." Computational Statistics 26, no. 4 (February 11, 2011): 585–612. http://dx.doi.org/10.1007/s00180-011-0228-6.

30

Shojaie, A., and G. Michailidis. "Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs." Biometrika 97, no. 3 (July 9, 2010): 519–38. http://dx.doi.org/10.1093/biomet/asq038.

31

ZHANG, WEN, and BRUCE J. WEST. "BOX DIMENSION OF COLORED NOISE AND DETERMINISTIC TIME SERIES IN HIGH DIMENSIONAL EUCLIDEAN SPACES." Fractals 04, no. 01 (March 1996): 91–95. http://dx.doi.org/10.1142/s0218348x9600011x.

Abstract:
We investigate the box dimension of a time series having an inverse power-law spectrum in a high dimensional Euclidean space. The time series can be random (colored noise) or deterministic. Both isotropic and anisotropic cases are included in our investigation. We study both the graph dimension and the trail dimension of the time series. We show that with the same inverse power-law spectrum, the deterministic series has a lower graph dimension than that of the colored noise, though both can have fractal dimensions. We also derive a sharp upper bound on the trail dimension of the time series.
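For readers who want to experiment numerically, a standard box-counting estimate of a series' graph dimension looks like the toy below. The paper's results are analytic; this code is only a generic illustration on a random-walk series.

```python
# Box-counting estimate of the graph dimension of a time series: count
# occupied (t, x) boxes at several scales and fit a log-log slope.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
x = np.cumsum(rng.normal(size=t.size))            # random-walk series
x = (x - x.min()) / (x.max() - x.min())           # normalize graph to the unit square

sizes = [1 / 2**k for k in range(2, 9)]
counts = []
for eps in sizes:
    ti = np.floor(t / eps).astype(int)
    xi = np.floor(x / eps).astype(int)
    counts.append(len(set(zip(ti, xi))))          # number of occupied boxes
slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
print(slope)                                      # box (graph) dimension estimate
```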
32

Tan, Zhen, Xiang Zhao, Yang Fang, Bin Ge, and Weidong Xiao. "Knowledge Graph Representation via Similarity-Based Embedding." Scientific Programming 2018 (July 15, 2018): 1–12. http://dx.doi.org/10.1155/2018/6325635.

Abstract:
Knowledge graph, a typical multi-relational structure, includes large-scale facts of the world, yet it is still far away from completeness. Knowledge graph embedding, as a representation method, constructs a low-dimensional and continuous space to describe the latent semantic information and predict the missing facts. Among various solutions, almost all embedding models have high time and memory-space complexities and, hence, are difficult to apply to large-scale knowledge graphs. Some other embedding models, such as TransE and DistMult, although with lower complexity, ignore inherent features and only use correlations between different entities to represent the features of each entity. To overcome these shortcomings, we present a novel low-complexity embedding model, namely, SimE-ER, to calculate the similarity of entities in independent and associated spaces. In SimE-ER, each entity (relation) is described as two parts. The entity (relation) features in independent space are represented by the features entity (relation) intrinsically owns and, in associated space, the entity (relation) features are expressed by the entity (relation) features they connect. And the similarity between the embeddings of the same entities in different representation spaces is high. In experiments, we evaluate our model with two typical tasks: entity prediction and relation prediction. Compared with the state-of-the-art models, our experimental results demonstrate that SimE-ER outperforms existing competitors and has low time and memory-space complexities.
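For reference, the TransE baseline mentioned above scores a triple (h, r, t) by how closely h + r lands on t; a toy scoring function is sketched below with invented embeddings. SimE-ER's two-space similarity model is not reproduced here.

```python
# TransE-style triple scoring: a fact (h, r, t) is plausible when the
# translated head h + r is close to the tail t in embedding space.
import numpy as np

rng = np.random.default_rng(0)
ent = rng.normal(size=(10, 8))                    # toy entity embeddings
rel = rng.normal(size=(4, 8))                     # toy relation embeddings

def transe_score(h, r, t):
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])  # higher = more plausible

print(transe_score(0, 1, 2))
```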
33

Le, Tuan M. V., and Hady W. Lauw. "Semantic Visualization with Neighborhood Graph Regularization." Journal of Artificial Intelligence Research 55 (April 28, 2016): 1091–133. http://dx.doi.org/10.1613/jair.4983.

Abstract:
Visualization of high-dimensional data, such as text documents, is useful to map out the similarities among various data points. In the high-dimensional space, documents are commonly represented as bags of words, with dimensionality equal to the vocabulary size. Classical approaches to document visualization directly reduce this into visualizable two or three dimensions. Recent approaches consider an intermediate representation in topic space, between word space and visualization space, which preserves the semantics by topic modeling. While aiming for a good fit between the model parameters and the observed data, previous approaches have not considered the local consistency among data instances. We consider the problem of semantic visualization by jointly modeling topics and visualization on the intrinsic document manifold, modeled using a neighborhood graph. Each document has both a topic distribution and visualization coordinate. Specifically, we propose an unsupervised probabilistic model, called SEMAFORE, which aims to preserve the manifold in the lower-dimensional spaces through a neighborhood regularization framework designed for the semantic visualization task. To validate the efficacy of SEMAFORE, our comprehensive experiments on a number of real-life text datasets of news articles and Web pages show that the proposed methods outperform the state-of-the-art baselines on objective evaluation metrics.
34

Madoš, Branislav, Eva Chovancová, Martin Chovanec, and Norbert Ádám. "CSVO: Clustered Sparse Voxel Octrees—A Hierarchical Data Structure for Geometry Representation of Voxelized 3D Scenes." Symmetry 14, no. 10 (October 12, 2022): 2114. http://dx.doi.org/10.3390/sym14102114.

Abstract:
When representing the geometry of voxelized three-dimensional scenes (especially those voxelized at high resolutions) in a naive—uncompressed—form, one may end up using vast amounts of data. These can easily exhaust the available memory capacity of the graphics card, the operating memory or even the secondary storage of the computer. A viable solution to this problem is to use domain-specific hierarchical data structures, based on octant trees or directed acyclic graphs, which, among other advantages, provide a compact binary representation that can thus be considered to be their compressed encoding. These data structures include—inter alia—sparse voxel octrees, sparse voxel directed acyclic graphs and symmetry-aware sparse voxel directed acyclic graphs. The paper proposes a new domain-specific hierarchical data structure: clustered sparse voxel octrees. It is designed to represent the geometry of voxelized three-dimensional scenes and can be constructed using the out-of-core algorithm proposed in the paper. The advantage of the presented data structure is its compact binary representation, achieved by omitting a significant number of pointers to child nodes (82.55% in the case of the Angel Lucy model at 128³ voxel resolution) and by using a wider range of child-node pointer lengths, including 8-bit, 16-bit and 32-bit. We achieved from 6.57 to 6.82 times more compact encoding compared to sparse voxel octrees in which all node components were 32-bit aligned, and from 4.11 to 4.27 times more compact encoding when not all node components were 32-bit aligned.
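To make the underlying data structure concrete, here is a minimal pointer-based sparse voxel octree insert in Python; the clustering and the variable-width child-pointer encoding that give CSVO its compactness are deliberately omitted.

```python
# Minimal sparse voxel octree: children exist only where geometry exists.
# Descends one bit of (x, y, z) per level to the leaf voxel.
class Node:
    __slots__ = ("children",)
    def __init__(self):
        self.children = [None] * 8                # sparse: None = empty octant

def insert(root, x, y, z, depth):
    node = root
    for level in range(depth - 1, -1, -1):
        octant = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
        if node.children[octant] is None:
            node.children[octant] = Node()        # allocate only on demand
        node = node.children[octant]
    return node                                   # leaf voxel at (x, y, z)

root = Node()
insert(root, 5, 3, 7, depth=3)                    # voxelize one point in an 8^3 grid
```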
35

Arone, Gregory, and Victor Turchin. "Graph-complexes computing the rational homotopy of high dimensional analogues of spaces of long knots." Annales de l'Institut Fourier 65, no. 1 (2015): 1–62. http://dx.doi.org/10.5802/aif.2924.

36

Chen, Shicong, Deyu Yuan, Shuhua Huang, and Yang Chen. "Link Prediction and Node Classification Based on Multitask Graph Autoencoder." Wireless Communications and Mobile Computing 2021 (April 17, 2021): 1–13. http://dx.doi.org/10.1155/2021/5537651.

Abstract:
The goal of network representation learning is to extract deep-level abstractions from data features, which can also be viewed as a process of transforming high-dimensional data into low-dimensional features. Learning the mapping functions between the two vector spaces is an essential problem. In this paper, we propose a new similarity index based on traditional machine learning, which integrates the concepts of common neighbors, local paths, and preferential attachment. Furthermore, to apply link prediction methods to the field of node classification, we innovatively establish an architecture named the multitask graph autoencoder. Specifically, in the context of structural deep network embedding, the architecture designs a framework with a high-order loss function by calculating node similarity from multiple angles, so that the model can make up for the deficiency of the second-order loss function. Through parameter fine-tuning, the high-order loss function is introduced into the optimized autoencoder. As proved by effective experiments, the framework is generally applicable to the majority of classical similarity indexes.
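A rough sketch of a combined similarity index in the spirit described above (common neighbors, local path, preferential attachment), using networkx on a standard toy graph; the weighting constants are hypothetical, not the paper's.

```python
# Combine three classical link-prediction signals into one score.
# alpha/beta weights are invented for illustration.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
A2, A3 = A @ A, A @ A @ A                         # 2- and 3-step path counts

def score(u, v, alpha=0.1, beta=0.01):
    cn = A2[u, v]                                 # common neighbors
    lp = cn + alpha * A3[u, v]                    # local path index
    pa = G.degree(u) * G.degree(v)                # preferential attachment
    return lp + beta * pa                         # hypothetical combination

print(score(0, 33))
```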
37

Bhaduri, Susmita, and Anirban Bhaduri. "Searching for Supersymmetry at LHC Using the Complex-Network-Based Method of the Three-Dimensional Visibility-Graph." Physics 2, no. 3 (September 10, 2020): 436–54. http://dx.doi.org/10.3390/physics2030025.

Abstract:
For the last several decades, there has been tremendous interest in the search for Supersymmetry (SUSY) in high energy physics. At the Large Hadron Collider (LHC), there have been continuous searches for SUSY, for prompt and non-prompt production, and for R-parity-conserving and R-parity-violating generation and decays. The limits obtained from these experiments and analyses for detecting the signatures of supersymmetric particles (LSP) revealed greater possibilities for such experiments in the collider. However, these signatures are usually derived under somewhat optimistic assumptions about the decay of sparticles to the final states. Moreover, SUSY might remain disguised at lower mass scales as a result of difficult and challenging mass spectra and mixed decay modes. In this investigation, a novel method of three-dimensional (3D) Visibility-Graph Analysis is proposed. This is an extension of visibility graph analysis of data series to scaling analysis in 3D space. The experimental data spaces analyzed are made up of the component space (in the X, Y and Z coordinates) of transverse momentum (pT) values taken from the 4-momenta of the signatures of the final state of the pair of mega-jets, extracted from the multiJet primary pp collision data from Run B of 2010 at 7 TeV that was used for the search for SUSY using a razor filter. The symmetry scaling and the inherent scale-freeness of the multi-particle production process are studied in terms of the 3D Power-of-Scale-freeness-of-Visibility-Graph (3D-PSVG) extracted from the 3D visibility graphs constructed out of the experimental data spaces. The signature of SUSY may be identified by analyzing the scaling behavior and long-range correlation inherent in the 3D space made up of signatures of the final state of multi-particles produced in the pp collision at 7 TeV, which the conventional method of analyzing the invariant mass or pT spectrum may miss.
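The 3D construction generalizes the natural visibility graph of a series, in which two samples are linked when the straight line between them clears every intermediate sample. A toy O(n²) version of that 1D criterion is sketched below; the 3D extension and the PSVG exponent estimation are not reproduced.

```python
# Natural visibility graph of a 1D series: connect samples a, b if every
# intermediate sample lies strictly below the chord from a to b.
import numpy as np

def visibility_edges(y):
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

series = np.random.default_rng(0).random(50)
print(len(visibility_edges(series)))              # degree statistics feed the scaling exponent
```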
38

Kopitkov, Dmitry, and Vadim Indelman. "No belief propagation required: Belief space planning in high-dimensional state spaces via factor graphs, the matrix determinant lemma, and re-use of calculation." International Journal of Robotics Research 36, no. 10 (August 10, 2017): 1088–130. http://dx.doi.org/10.1177/0278364917721629.

Abstract:
We develop a computationally efficient approach for evaluating the information-theoretic term within belief space planning (BSP), where during belief propagation the state vector can be constant or augmented. We consider both unfocused and focused problem settings, whereas uncertainty reduction of the entire system or only of chosen variables is of interest, respectively. State-of-the-art approaches typically propagate the belief state, for each candidate action, through calculation of the posterior information (or covariance) matrix and subsequently compute its determinant (required for entropy). In contrast, our approach reduces runtime complexity by avoiding these calculations. We formulate the problem in terms of factor graphs and show that belief propagation is not needed, requiring instead a one-time calculation that depends on (the increasing with time) state dimensionality, and per-candidate calculations that are independent of the latter. To that end, we develop an augmented version of the matrix determinant lemma, and show that computations can be re-used when evaluating impact of different candidate actions. These two key ingredients and the factor graph representation of the problem result in a computationally efficient (augmented) BSP approach that accounts for different sources of uncertainty and can be used with various sensing modalities. We examine the unfocused and focused instances of our approach, and compare it with the state of the art, in simulation and using real-world data, considering problems such as autonomous navigation in unknown environments, measurement selection and sensor deployment. We show that our approach significantly reduces running time without any compromise in performance.
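The key identity can be checked numerically in a few lines. Below, random matrices stand in for the information matrix and the new factors; the check verifies the (generalized) matrix determinant lemma det(A + UVᵀ) = det(A) · det(I + VᵀA⁻¹U), which lets a small k×k determinant replace an n×n recomputation per candidate action.

```python
# Numerical check of the matrix determinant lemma exploited above.
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
A = np.eye(n) * 3 + rng.normal(size=(n, n)) * 0.1 # well-conditioned "information" matrix
U = rng.normal(size=(n, k))                       # low-rank update factors
V = rng.normal(size=(n, k))

lhs = np.linalg.det(A + U @ V.T)                  # naive O(n^3) recomputation
rhs = np.linalg.det(A) * np.linalg.det(np.eye(k) + V.T @ np.linalg.solve(A, U))
assert np.isclose(lhs, rhs)                       # lemma: only a k x k determinant is new
```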
39

Xie, Luodi, Huimin Huang, and Qing Du. "A Co-Embedding Model with Variational Auto-Encoder for Knowledge Graphs." Applied Sciences 12, no. 2 (January 12, 2022): 715. http://dx.doi.org/10.3390/app12020715.

Abstract:
Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations for entities and relations. It serves as the basis for downstream tasks, such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities/relations as vectors or tensors, mapping them in different semantic spaces and ignoring the uncertainties. The affinities between entities and relations are ambiguous when they are not embedded in the same latent spaces. In this paper, we incorporate a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the issue of neglecting uncertainty for KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method has the advantages of high quality and interpretability. Our experimental results on several benchmark datasets demonstrate our model’s superiority over the state-of-the-art baselines.
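The variational machinery rests on the reparameterization trick: sample an embedding z from the encoder's Gaussian as a deterministic function of noise so gradients can flow. A minimal NumPy sketch with invented shapes follows; it is not the paper's model.

```python
# Reparameterized sampling from a diagonal Gaussian, plus the KL term
# against a standard normal prior, as used in VAE-style KG embedding.
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(size=(5, 16))                     # encoder means (5 entities, toy)
log_var = rng.normal(scale=0.1, size=(5, 16))     # encoder log-variances
eps = rng.normal(size=mu.shape)                   # external noise
z = mu + np.exp(0.5 * log_var) * eps              # z ~ N(mu, diag(exp(log_var)))
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # KL(q || N(0, I))
print(z.shape, float(kl))
```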
40

Chakraborty, Shubhadeep, and Ali Shojaie. "Nonparametric Causal Structure Learning in High Dimensions." Entropy 24, no. 3 (February 28, 2022): 351. http://dx.doi.org/10.3390/e24030351.

Abstract:
The PC and FCI algorithms are popular constraint-based methods for learning the structure of directed acyclic graphs (DAGs) in the absence and presence of latent and selection variables, respectively. These algorithms (and their order-independent variants, PC-stable and FCI-stable) have been shown to be consistent for learning sparse high-dimensional DAGs based on partial correlations. However, inferring conditional independences from partial correlations is valid if the data are jointly Gaussian or generated from a linear structural equation model—an assumption that may be violated in many applications. To broaden the scope of high-dimensional causal structure learning, we propose nonparametric variants of the PC-stable and FCI-stable algorithms that employ the conditional distance covariance (CdCov) to test for conditional independence relationships. As the key theoretical contribution, we prove that the high-dimensional consistency of the PC-stable and FCI-stable algorithms carries over to general distributions over DAGs when we implement CdCov-based nonparametric tests for conditional independence. Numerical studies demonstrate that our proposed algorithms perform nearly as well as PC-stable and FCI-stable for Gaussian distributions, and offer advantages in non-Gaussian graphical models.
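For orientation, the classical Gaussian conditional-independence test that the paper replaces looks like the sketch below (a Fisher z-test of partial correlation). The CdCov-based nonparametric test actually proposed is not implemented here.

```python
# Fisher z-test of the partial correlation of X_i and X_j given X_S,
# the standard CI test inside the Gaussian PC algorithm.
import numpy as np
from scipy import stats

def partial_corr_test(X, i, j, S):
    """p-value for H0: X_i independent of X_j given X_S (Gaussian case)."""
    idx = [i, j] + list(S)
    P = np.linalg.inv(np.corrcoef(X[:, idx], rowvar=False))  # precision of the submatrix
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])                # partial correlation
    n = X.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z)))                  # two-sided p-value

X = np.random.default_rng(0).normal(size=(500, 4))
print(partial_corr_test(X, 0, 1, [2]))                       # large p: no edge kept
```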
APA, Harvard, Vancouver, ISO, and other styles
41

Barto, Libor, Jakub Bulín, Andrei Krokhin, and Jakub Opršal. "Algebraic Approach to Promise Constraint Satisfaction." Journal of the ACM 68, no. 4 (July 7, 2021): 1–66. http://dx.doi.org/10.1145/3457606.

Full text
Abstract:
The complexity and approximability of the constraint satisfaction problem (CSP) has been actively studied over the past 20 years. A new version of the CSP, the promise CSP (PCSP), has recently been proposed, motivated by open questions about the approximability of variants of satisfiability and graph colouring. The PCSP significantly extends the standard decision CSP. The complexity of CSPs with a fixed constraint language on a finite domain has recently been fully classified, greatly guided by the algebraic approach, which uses polymorphisms—high-dimensional symmetries of solution spaces—to analyse the complexity of problems. The corresponding classification for PCSPs is wide open and includes some long-standing open questions, such as the complexity of approximate graph colouring, as special cases. The basic algebraic approach to PCSP was initiated by Brakensiek and Guruswami, and in this article, we significantly extend it and lift it from concrete properties of polymorphisms to their abstract properties. We introduce a new class of problems that can be viewed as algebraic versions of the (Gap) Label Cover problem and show that every PCSP with a fixed constraint language is equivalent to a problem of this form. This allows us to identify a “measure of symmetry” that is well suited for comparing and relating the complexity of different PCSPs via the algebraic approach. We demonstrate how our theory can be applied by giving both general and specific hardness/tractability results. Among other things, we improve the state-of-the-art in approximate graph colouring by showing that, for any k ≥ 3, it is NP-hard to find a (2k − 1)-colouring of a given k-colourable graph.
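Polymorphisms are concrete enough to verify by brute force. As a small illustration (not from the paper), one can check that the Boolean majority operation preserves the satisfying assignments of a 2-SAT clause, which is exactly what makes it a polymorphism of that relation:

```python
import itertools

def is_polymorphism(f, arity, relation):
    """Brute-force check that f preserves `relation` (a set of
    equal-length tuples): applying f coordinate-wise to any choice of
    arity-many tuples of the relation must land back in the relation."""
    for rows in itertools.product(relation, repeat=arity):
        image = tuple(f(*col) for col in zip(*rows))
        if image not in relation:
            return False
    return True

# the majority operation is a polymorphism of the clause (x OR y)
majority = lambda a, b, c: (a & b) | (a & c) | (b & c)
clause = {(0, 1), (1, 0), (1, 1)}   # satisfying assignments of x OR y
print(is_polymorphism(majority, 3, clause))   # True
```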
APA, Harvard, Vancouver, ISO, and other styles
42

Hong, Yi, Chuanwen Luo, Deying Li, Zhibo Chen, and Xiyun Wang. "Lifetime-Maximized Strong Barrier Coverage of 3D Camera Sensor Networks." Wireless Communications and Mobile Computing 2022 (September 19, 2022): 1–12. http://dx.doi.org/10.1155/2022/2659901.

Full text
Abstract:
Camera sensor networks (CSNs) have advantages in providing precise and multimedia information for a wide range of applications. The high coverage quality of CSNs is especially well suited to the monitoring requirements of barrier coverage. In three-dimensional (3D) application scenarios, tracking a potential intruder through irregular monitored spaces brings additional difficulties and challenges to strong barrier coverage for CSNs. In this paper, we consider the strong barrier coverage problem in 3D CSNs, with the objective of monitoring the intruder at high resolution while maximizing the network lifetime. We first introduce the definition and hardness proof for the problem, based on the irregular space model and a network model that adopts the Region of Interest (ROI) sensing model with high effective resolution. Second, we design two sleep-and-awake scheduling algorithms for the problem in homogeneous and heterogeneous networks, respectively, based on an auxiliary graph transformation and the construction of disjoint flows. To evaluate the algorithms' performance on lifetime maximization, we conduct extensive simulation experiments and analyze the results in terms of their advantages and applicable scenarios.
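A rough sketch of the disjoint-flows idea in the homogeneous case, assuming networkx and a hand-built auxiliary coverage graph; constructing that graph from the actual camera geometry is the part this sketch omits:

```python
import networkx as nx

def disjoint_barriers(G, source="s", sink="t"):
    """On an auxiliary coverage graph (an edge means two cameras'
    sensing regions overlap; s and t stand for the region's two side
    walls), node-disjoint s-t paths are disjoint strong barriers.
    Waking one barrier at a time while the others sleep multiplies
    the network lifetime by the number of barriers."""
    return list(nx.node_disjoint_paths(G, source, sink))

# toy auxiliary graph containing two disjoint barriers
G = nx.Graph([("s", "a"), ("a", "b"), ("b", "t"),
              ("s", "c"), ("c", "d"), ("d", "t")])
for i, barrier in enumerate(disjoint_barriers(G)):
    print(f"barrier {i}: {barrier}")
```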
APA, Harvard, Vancouver, ISO, and other styles
43

Varoudis, Tasos, Abigail G. Swenson, Scott D. Kirkton, and James S. Waters. "Exploring nest structures of acorn dwelling ants with X-ray microtomography and surface-based three-dimensional visibility graph analysis." Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1753 (July 2, 2018): 20170237. http://dx.doi.org/10.1098/rstb.2017.0237.

Full text
Abstract:
The physical spaces within which organisms live affect their biology and in many cases can be considered part of their extended phenotype. The nests of social insect societies have a fundamental impact on their ability to function as complex superorganisms. Ants in many species excavate elaborate subterranean nests, but others inhabit relatively small pre-formed cavities within rock crevices and hollow seeds. Temnothorax ants, which often nest within acorns, have become a model system for studying collective decision making. While these ants have demonstrated remarkable degrees of rationality and consistent precision with regard to their nest choices, never before has the fine-scale internal architecture and spatial organization of their nests been investigated. We used X-ray microtomography to record high-resolution three-dimensional (3D) scans of Temnothorax colonies within their acorns. These data were then quantified using image segmentation and surface-based 3D visibility graph analysis, a new computational methodology for analysing spatial structures. The visibility graph analysis method integrates knowledge from the field of architecture with the empirical study of animal-built structures, thus providing the first methodological cross-disciplinary synergy of these two research areas. We found a surprisingly high surface area and degree of spatial heterogeneity within the acorn nests. Specific regions, such as those associated with the locations of queens and brood, were significantly more conducive to connectivity than others. From an architect's point of view, spatial analysis research has never focused on all-surface 3D movement, as we describe within ant nests. Therefore, we believe our approach will provide new methods for understanding both human design and the comparative biology of habitat spaces. This article is part of the theme issue ‘Interdisciplinary approaches for uncovering the impacts of architecture on collective behaviour’.
APA, Harvard, Vancouver, ISO, and other styles
44

Orlova, Svetlana, and Alexander Lopota. "Scene recognition for confined spaces in mobile robotics: current state and tendencies." Robotics and Technical Cybernetics 10, no. 1 (March 2022): 14–24. http://dx.doi.org/10.31776/rtcj.10102.

Full text
Abstract:
The article discusses the problem of scene recognition for mobile robotics. Subtasks that have to be solved to implement a high-level understanding of the environment are considered. The basis here is an understanding of the geometry and semantics of the scene, which can be decomposed into the subtasks of robot localization, mapping and semantic analysis. Simultaneous localization and mapping (SLAM) techniques have already been successfully applied and, although some of their problems for dynamic environments remain unresolved, they are not the main obstacle here. The focus of the work is the task of semantic analysis of the scene, which requires three-dimensional segmentation. The field of 3D segmentation, like the field of image segmentation, has been split into semantic and object segmentation, contrary to the needs of many potential applications. At present, however, panoptic segmentation, which combines the two and describes the scene most fully, is beginning to develop. The paper reviews methods of 3D panoptic segmentation and identifies promising approaches. Open problems in scene recognition are also discussed. There is a clear trend towards the development of complex incremental methods of metric-semantic SLAM, which combine segmentation with SLAM methods, and towards the use of scene graphs, which describe the geometry and semantics of scene elements and the relationships between them. Scene graphs are especially promising for mobile robotics, since they provide a transition from low-level representations of objects and spaces (for example, segmented point clouds) to a description of the scene at a high level of abstraction, close to a human one (a list of objects in the scene, their properties, and their locations relative to each other).
APA, Harvard, Vancouver, ISO, and other styles
45

Baayen, R. Harald, Yu-Ying Chuang, and James P. Blevins. "Inflectional morphology with linear mappings." Mental Lexicon 13, no. 2 (December 31, 2018): 230–68. http://dx.doi.org/10.1075/ml.18010.baa.

Full text
Abstract:
This methodological study provides a step-by-step introduction to a computational implementation of word and paradigm morphology using linear mappings between vector spaces for form and meaning. Taking as starting point the linear regression model, the main concepts underlying linear mappings are introduced and illustrated with R code. It is then shown how vector spaces can be set up for Latin verb conjugations, using 672 inflected variants of two verbs each from the four main conjugation classes. It turns out that mappings from form to meaning (comprehension), and from meaning to form (production), can be carried out loss-free. This study concludes with a demonstration that when the graph of triphones, the units that underlie the form space, is mapped onto a 2-dimensional space with a self-organising algorithm from physics (graphopt), morphological functions show topological clustering, even though morphemic units do not play any role whatsoever in the model. It follows, first, that evidence for morphemes emerging from experimental studies using, for instance, fMRI to localize morphemes in the brain does not guarantee the existence of morphemes in the brain, and second, that potential topological organization of morphological form in the cortex may depend to a high degree on the morphological system of a language.
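The core computation, a linear least-squares mapping between a form matrix and a meaning matrix, is easy to reproduce. The study uses R; the same step in Python with toy random matrices (not the Latin data) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.integers(0, 2, size=(672, 800)).astype(float)  # form matrix (toy triphone cues)
S = rng.standard_normal((672, 100))                    # meaning matrix (toy semantic vectors)

# comprehension: a linear mapping F with C @ F ~= S, found by least squares
F, *_ = np.linalg.lstsq(C, S, rcond=None)
print(np.allclose(C @ F, S))   # True: loss-free when C has full row rank

# production runs the other way: a mapping G with S @ G ~= C
# (not exact here, since the 100-dim toy meaning space cannot span C)
G, *_ = np.linalg.lstsq(S, C, rcond=None)
```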
APA, Harvard, Vancouver, ISO, and other styles
46

WU, XIAODONG. "EFFICIENT ALGORITHMS FOR THE OPTIMAL-RATIO REGION DETECTION PROBLEMS IN DISCRETE GEOMETRY WITH APPLICATIONS." International Journal of Computational Geometry & Applications 19, no. 02 (April 2009): 141–59. http://dx.doi.org/10.1142/s0218195909002873.

Full text
Abstract:
In this paper, we study several interesting optimal-ratio region detection (ORD) problems in d-D (d ≥ 3) discrete geometric spaces, which arise in high-dimensional medical image segmentation. Given a d-D voxel grid of n cells, we consider two classes of geometric regions, enclosed by a single smooth heightfield surface or by two coupled smooth heightfield surfaces defined on the entire grid domain. The objective functions are normalized by a function of the desired regions, which avoids a bias toward producing an overly large or small region due to data noise. The normalization functions that we employ are used in real medical image segmentation. To the best of our knowledge, no previous results on these problems in high dimensions are known. We develop a unified algorithmic framework based on a careful characterization of the intrinsic geometric structures and a nontrivial graph transformation scheme, yielding efficient polynomial-time algorithms for solving these ORD problems. Our main ideas include the following. We observe that the optimal solution to the ORD problems can be obtained via the construction of a convex hull for a set of O(n) unknown 2-D points using the hand-probing technique. The probing oracles are implemented by computing a minimum s-t cut in a weighted directed graph. The ORD problems are then solved by O(n) calls to the minimum s-t cut algorithm. For the class of regions bounded by a single heightfield surface, our further investigation shows that the O(n) calls to the minimum s-t cut algorithm operate on a monotone parametric flow network, which enables detecting the optimal-ratio region within the complexity of computing a single maximum flow.
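The parametric flavour of the approach can be illustrated with a Dinkelbach-style loop, in which a brute-force inner minimisation stands in for the paper's min s-t cut oracle; the cost and size functions below are toy assumptions:

```python
import itertools

def optimal_ratio_region(cells, cost, size, tol=1e-9):
    """Dinkelbach-style parametric scheme for minimising cost(R)/size(R):
    repeatedly minimise cost(R) - lam * size(R), then update lam to the
    ratio of the minimiser.  The inner minimisation is brute force here;
    in the paper it is a single min s-t cut on a transformed graph."""
    regions = [set(R) for k in range(1, len(cells) + 1)
               for R in itertools.combinations(cells, k)]
    lam = cost(regions[0]) / size(regions[0])
    while True:
        best = min(regions, key=lambda R: cost(R) - lam * size(R))
        if abs(cost(best) - lam * size(best)) < tol:
            return best, lam
        lam = cost(best) / size(best)

# toy instance: minimise (1 + sum of cell costs) / region size
cost = lambda R: 1.0 + sum(c % 3 for c in R)
size = lambda R: float(len(R))
print(optimal_ratio_region(list(range(6)), cost, size))   # picks cells 0 and 3
```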
APA, Harvard, Vancouver, ISO, and other styles
47

Suzuki, Ikumi, Kazuo Hara, Masashi Shimbo, Yuji Matsumoto, and Marco Saerens. "Investigating the Effectiveness of Laplacian-Based Kernels in Hub Reduction." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1112–18. http://dx.doi.org/10.1609/aaai.v26i1.8295.

Full text
Abstract:
A “hub” is an object closely surrounded by, or very similar to, many other objects in the dataset. Recent studies by Radovanović et al. indicate that in high-dimensional spaces, hubs almost always emerge, and objects close to the data centroid tend to become hubs. In this paper, we show that the family of kernels based on the graph Laplacian makes all objects in the dataset equally similar to the centroid, and thus they are expected to produce fewer hubs when used as a similarity measure. We investigate this hypothesis using both synthetic and real-world data. It turns out that these kernels suppress hubs in some cases but not always, and the results seem to be affected by the size of the data—a factor not discussed previously. However, for the datasets in which hubs are indeed reduced by the Laplacian-based kernels, these kernels work well in ranking and classification tasks. This result suggests that the number of hubs, which can be readily computed in an unsupervised fashion, can be a yardstick of whether Laplacian-based kernels work effectively for a given dataset.
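A small experiment in this spirit is easy to set up: compare the skewness of the k-occurrence counts (a standard hubness measure) under raw cosine similarity and under a commute-time kernel obtained from the Laplacian pseudoinverse. The dense toy similarity graph below is an assumption made for brevity; whether hubness drops is data-dependent, as the paper reports:

```python
import numpy as np
from scipy.stats import skew

def n_k_skewness(S, k=10):
    """Hubness = skewness of the k-occurrence counts N_k(x): how often
    x appears among another point's k most similar points under S."""
    np.fill_diagonal(S, -np.inf)                 # exclude self-similarity
    topk = np.argsort(-S, axis=1)[:, :k]
    counts = np.bincount(topk.ravel(), minlength=S.shape[0])
    return skew(counts)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))              # high-dimensional data
norms = np.linalg.norm(X, axis=1)
cos = (X @ X.T) / np.outer(norms, norms)

# commute-time kernel: pseudoinverse of the Laplacian of a similarity graph
W = np.exp(cos)                                  # positive toy edge weights
np.fill_diagonal(W, 0.0)
Lap = np.diag(W.sum(1)) - W
K = np.linalg.pinv(Lap)

print("cosine hubness:      ", n_k_skewness(cos.copy()))
print("commute-time hubness:", n_k_skewness(K.copy()))
```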
APA, Harvard, Vancouver, ISO, and other styles
48

Arthur, John Kingsley, Conghua Zhou, Eric Appiah Mantey, Jeremiah Osei-Kwakye, and Yaru Chen. "A Discriminative-Based Geometric Deep Learning Model for Cross Domain Recommender Systems." Applied Sciences 12, no. 10 (May 20, 2022): 5202. http://dx.doi.org/10.3390/app12105202.

Full text
Abstract:
Recommender systems (RS) have been widely deployed in many real-world applications, but usually suffer from the long-standing user/item cold-start problem. As a promising approach, cross-domain recommendation (CDR), which has attracted a surge of interest, aims to transfer the user preferences observed in the source domain to make recommendations in the target domain. Traditional machine learning and deep learning methods are not designed to learn from complex data representations such as graphs, manifolds and 3D objects, yet current trends in data generation include exactly these representations. In addition, existing works do not consider the complex dimensions and the locality structure of items, which contain discriminative information essential for improving the accuracy of the recommender system. Furthermore, the similarity between test samples and their neighboring training data in the kernel space is not fully exploited for recommended objects belonging to the same category, so the embedded discriminative information is not captured effectively. These challenges leave the sparsity and item/user cold-start problems unsolved, impeding the performance of the cross-domain recommender system and causing it to suggest less relevant and undistinguished items to the user. To handle these challenges, we propose a novel deep learning (DL) method, Discriminative Geometric Deep Learning (D-GDL), for cross-domain recommender systems. In the proposed D-GDL, a discriminative function based on sparse local sensitivity is introduced into the structure of the DL network, together with a local representation learning component (a local sensitivity-based deep convolutional belief network) that captures the local geometric and visual information in the structure of the recommended 3D objects. A kernel-based method (a local sensitivity deep belief network) is also incorporated into the DL framework to map the complex structure of recommended objects into a high-dimensional feature space and achieve an effective recognition result. An improved kernel density estimator serves as a weighting function in building the high-dimensional feature space, which makes the method more resistant to geometric noise and improves computational performance. The experiment results show that the proposed D-GDL significantly outperforms the state-of-the-art methods in both sparse and dense settings for cross-domain recommendation tasks.
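Of the components listed, the density-based weighting is the most self-contained to sketch. The generic Gaussian kernel density estimator below, used as a per-sample weight, is only a stand-in, since the abstract does not specify the form of the improved estimator:

```python
import numpy as np

def kde_weights(X, bandwidth=1.0):
    """Gaussian kernel density estimate over the samples, normalised to
    serve as a per-sample weighting function (a generic stand-in for
    the paper's improved estimator)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)
    return dens / dens.sum()

X = np.random.default_rng(5).standard_normal((100, 16))
w = kde_weights(X)
print(w.shape, w.sum())   # (100,) 1.0
```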
APA, Harvard, Vancouver, ISO, and other styles
49

Schmitz, M. A., J. L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R. I. Vito Capobianco, R. Clédassou, et al. "Euclid: Nonparametric point spread function field recovery through interpolation on a graph Laplacian." Astronomy & Astrophysics 636 (April 2020): A78. http://dx.doi.org/10.1051/0004-6361/201936094.

Full text
Abstract:
Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy. Aims. The aims of this paper are twofold. Firstly, we took steps toward a nonparametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Secondly, we studied the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario. Methods. We extended the recently proposed resolved components analysis approach, which performs super-resolution on a field of under-sampled observations of a spatially varying, image-valued function. We added a spatial interpolation component to the method, making it a true 2-dimensional PSF model. We compared our approach to PSFEx, then quantified the impact of PSF recovery errors on galaxy shape measurements through image simulations. Results. Our approach yields an improvement over PSFEx in terms of the PSF model and the observed galaxy shape errors, though it is at present far from reaching the required Euclid accuracy. We also find that the usual formalism used for the propagation of PSF model errors to weak lensing quantities no longer holds in the case of a Euclid-like PSF. In particular, different shape measurement approaches can react differently to the same PSF modeling errors.
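Interpolating a field from sparsely sampled positions via a graph Laplacian can be sketched with the standard harmonic (grounded-Laplacian) solve; this is a generic stand-in for the paper's RCA extension, with toy positions and a toy scalar field in place of image-valued PSFs:

```python
import numpy as np

def laplacian_interpolate(pos, values, known_mask, sigma=0.1):
    """Harmonic interpolation on a graph Laplacian: quantities measured
    at 'star' positions are propagated to 'galaxy' positions by solving
    L_uu f_u = -L_ul f_l, a standard graph-based interpolation."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))         # similarity graph weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    u, l = ~known_mask, known_mask
    f_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ values[l])
    out = values.copy()
    out[u] = f_u
    return out

rng = np.random.default_rng(3)
pos = rng.random((200, 2))                    # toy field positions
truth = np.sin(4 * pos[:, 0]) * pos[:, 1]     # toy spatially varying quantity
known = rng.random(200) < 0.3                 # "stars" with measurements
est = laplacian_interpolate(pos, np.where(known, truth, 0.0), known)
print(np.abs(est[~known] - truth[~known]).mean())
```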
APA, Harvard, Vancouver, ISO, and other styles
50

Kopitkov, Dmitry, and Vadim Indelman. "General-purpose incremental covariance update and efficient belief space planning via a factor-graph propagation action tree." International Journal of Robotics Research 38, no. 14 (September 10, 2019): 1644–73. http://dx.doi.org/10.1177/0278364919875199.

Full text
Abstract:
Fast covariance calculation is required both for simultaneous localization and mapping (SLAM; e.g., in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this article, we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in probabilistic inference, such as the introduction of new observations/variables or relinearization. Our approach is shown to recover them faster than other state-of-the-art methods. Second, we present a computationally efficient approach for BSP in high-dimensional state spaces, leveraging our incremental covariance update method. State-of-the-art BSP approaches perform belief propagation for each candidate action and then evaluate an objective function that typically includes an information-theoretic term, such as entropy or information gain. Yet, candidate actions often have similar parts (e.g., common trajectory parts), which are nevertheless evaluated separately for each candidate. Moreover, calculating the information-theoretic term involves a costly determinant computation of the entire information (covariance) matrix, which is O(n³) with n being the dimension of the state, or costly Schur complement operations if only the marginal posterior covariance of certain variables is of interest. Our approach, rAMDL-Tree, extends our previous BSP method rAMDL by exploiting incremental covariance calculation and performing calculation reuse between common parts of non-myopic candidate actions, such that these parts are evaluated only once, in contrast to existing approaches. To that end, we represent all candidate actions together in a single unified graphical model, which we introduce and call a factor-graph propagation (FGP) action tree. Each arrow (edge) of the FGP action tree represents a sub-action of one (or more) candidate action sequence(s), and in order to evaluate its information impact we require specific covariance entries of an intermediate belief represented by the tree's vertex from which the edge comes out (i.e., the tail of the arrow). Overall, our approach has only a one-time calculation that depends on n, while evaluating action impact does not depend on n. We perform a careful examination of our approaches in simulation, considering the problem of autonomous navigation in unknown environments, where rAMDL-Tree shows superior performance compared with rAMDL, while determining the same best actions.
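At its core, an incremental covariance update of this kind is the Woodbury identity at work. A minimal sketch with toy matrices (not the authors' rAMDL-Tree code) that recovers posterior covariance entries from a small m x m solve instead of an n x n inversion:

```python
import numpy as np

def covariance_after_update(Sigma, A):
    """Incremental covariance update after new factors with stacked
    Jacobian A (m x n) arrive.  By the Woodbury identity,
        Sigma' = Sigma - Sigma A^T (I_m + A Sigma A^T)^{-1} A Sigma,
    so specific entries of Sigma' follow from an m x m solve rather
    than re-inverting the n x n information matrix."""
    m = A.shape[0]
    G = A @ Sigma                            # m x n
    S = np.eye(m) + G @ A.T                  # small m x m system
    return Sigma - G.T @ np.linalg.solve(S, G)

# sanity check against direct recomputation
rng = np.random.default_rng(7)
n = 50
L0 = rng.standard_normal((n, n))
Lam = L0 @ L0.T + n * np.eye(n)              # prior information matrix
Sigma = np.linalg.inv(Lam)
A = rng.standard_normal((3, n))              # 3 new measurement rows
direct = np.linalg.inv(Lam + A.T @ A)
print(np.allclose(covariance_after_update(Sigma, A), direct))   # True
```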
APA, Harvard, Vancouver, ISO, and other styles