
Journal articles on the topic 'Euclidean networks'


Consult the top 50 journal articles for your research on the topic 'Euclidean networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xuan, Qi, Xiaodi Ma, Chenbo Fu, Hui Dong, Guijun Zhang, and Li Yu. "Heterogeneous multidimensional scaling for complex networks." International Journal of Modern Physics C 26, no. 02 (February 2015): 1550023. http://dx.doi.org/10.1142/s0129183115500230.

Full text
Abstract:
Many real-world networks are essentially heterogeneous, where the nodes have different abilities to gain connections. Such networks are difficult to embed into low-dimensional Euclidean space if we ignore the heterogeneity and treat all the nodes equally. In this paper, based on a newly defined heterogeneous distance and a generalized network distance under the constraints of network and triangle inequalities, respectively, we propose a new heterogeneous multidimensional scaling method (HMDS) to embed different networks into proper Euclidean spaces. We find that HMDS behaves much better than the traditional multidimensional scaling method (MDS) in embedding different artificial and real-world networks into Euclidean spaces. We also propose a method to estimate the appropriate dimensions of Euclidean spaces for different networks, and find that the estimated dimensions are quite close to the real dimensions for the geometrical networks under study. These methods can thus help us better understand the evolution of real-world networks, and have practical importance in network visualization, community detection, link prediction and the localization of wireless sensors.
APA, Harvard, Vancouver, ISO, and other styles
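The HMDS method above generalizes classical multidimensional scaling applied to a network's shortest-path distance matrix. For reference, here is a minimal sketch of that classical MDS baseline (not the authors' HMDS: the heterogeneous distance and the dimension-estimation step are omitted), assuming NumPy and NetworkX are available and using an illustrative random geometric graph rather than data from the paper:

```python
# A minimal sketch of classical (Torgerson) MDS on a network's shortest-path
# distance matrix -- the baseline that HMDS generalizes. The example graph is
# an illustrative random geometric graph, not data from the paper.
import numpy as np
import networkx as nx

def mds_embed(G, dim=2):
    D = np.asarray(nx.floyd_warshall_numpy(G))   # all-pairs shortest-path distances
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]      # keep the largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[order], 0.0, None))
    return eigvecs[:, order] * scale             # node coordinates in R^dim

G = nx.random_geometric_graph(200, 0.15, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len))  # keep distances finite
X = mds_embed(G, dim=2)
print(X.shape)
```

Double-centering the squared distance matrix and keeping the leading eigenvectors is the standard Torgerson construction; HMDS replaces the plain shortest-path distances with the paper's heterogeneous distance.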
2

Xing, Chenjie, Yuan Zhou, Yinan Peng, Jieke Hao, and Shuoshi Li. "Specific Emitter Identification Based on Ensemble Neural Network and Signal Graph." Applied Sciences 12, no. 11 (May 28, 2022): 5496. http://dx.doi.org/10.3390/app12115496.

Full text
Abstract:
Specific emitter identification (SEI) is a technology for extracting fingerprint features from a signal and identifying the emitter. In this paper, the authors propose an SEI method based on ensemble neural networks (ENN) and signal graphs, with the following innovations: First, a signal graph is used to represent signal data in a non-Euclidean space. Namely, sequence signal data are constructed into a signal graph to transform the sequence signal from a Euclidean space to a non-Euclidean space. Hence, the graph feature (the feature of the non-Euclidean space) of the signal can be extracted from the signal graph. Second, the ensemble neural network integrates a graph feature extractor and a sequence feature extractor, making it possible to extract graph and sequence features simultaneously. This ensemble neural network also fuses graph features with sequence features, obtaining an ensemble feature that combines features from both Euclidean and non-Euclidean space. Therefore, the ensemble feature contains more effective information for the identification of the emitter. The study results demonstrate that this SEI method has higher SEI accuracy and robustness than traditional machine learning methods and common deep learning methods.
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Shao-Lun, Changho Suh, and Lizhong Zheng. "Euclidean Information Theory of Networks." IEEE Transactions on Information Theory 61, no. 12 (December 2015): 6795–814. http://dx.doi.org/10.1109/tit.2015.2484066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Carlsson, John Gunnar, and Fan Jia. "Euclidean Hub-and-Spoke Networks." Operations Research 61, no. 6 (December 2013): 1360–82. http://dx.doi.org/10.1287/opre.2013.1219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Wei, Guangmin Hu, and Fucai Yu. "An Unsupervised Learning Method for Attributed Network Based on Non-Euclidean Geometry." Symmetry 13, no. 5 (May 19, 2021): 905. http://dx.doi.org/10.3390/sym13050905.

Full text
Abstract:
Many real-world networks can be modeled as attributed networks, where nodes are affiliated with attributes. When we implement attributed network embedding, we need to face two types of heterogeneous information, namely, structural information and attribute information. The structural information of undirected networks is usually expressed as a symmetric adjacency matrix. Network embedding learning utilizes the above information to learn vector representations of the nodes in the network. How to integrate these two types of heterogeneous information to improve the performance of network embedding is a challenge. Most of the current approaches embed the networks in Euclidean spaces, but the networks themselves are non-Euclidean. As a consequence, the geometric differences between the embedded space and the underlying space of the network will affect the performance of the network embedding. According to the non-Euclidean geometry of networks, this paper proposes an attributed network embedding framework based on hyperbolic geometry and the Ricci curvature, namely, RHAE. Our method consists of two modules: (1) the first module is an autoencoder module in which each layer is provided with a network information aggregation layer based on the Ricci curvature and an embedding layer based on hyperbolic geometry; (2) the second module is a skip-gram module in which the random walk is based on the Ricci curvature. These two modules are based on non-Euclidean geometry, but they fuse the topology information and attribute information in the network from different angles. Experimental results on some benchmark datasets show that our approach outperforms the baselines.
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Xinzheng, Xiaoyang Zhao, Meng Wei, and Zhongnian Li. "A comprehensive review of graph convolutional networks: approaches and applications." Electronic Research Archive 31, no. 7 (2023): 4185–215. http://dx.doi.org/10.3934/era.2023213.

Full text
Abstract:
Convolutional neural networks (CNNs) utilize local translation invariance in the Euclidean domain and have remarkable achievements in computer vision tasks. However, there are many data types with non-Euclidean structures, such as social networks, chemical molecules, knowledge graphs, etc., which are crucial to real-world applications. The graph convolutional neural network (GCN), as a derivative of CNNs for non-Euclidean data, was established for non-Euclidean graph data. In this paper, we mainly survey the progress of GCNs and introduce in detail several basic models based on GCNs. First, we review the challenges in building GCNs, including large-scale graph data, directed graphs and multi-scale graph tasks. Also, we briefly discuss some applications of GCNs, including computer vision, transportation networks and other fields. Furthermore, we point out some open issues and highlight some future research trends for GCNs.
APA, Harvard, Vancouver, ISO, and other styles
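The basic graph-convolution layer that most of the surveyed GCN variants build on propagates node features as H' = σ(D̂^(-1/2) (A + I) D̂^(-1/2) H W). A minimal NumPy sketch of that propagation rule, using illustrative toy inputs rather than anything taken from the surveyed works:

```python
# A minimal NumPy sketch of the basic graph-convolution propagation rule
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W). The adjacency matrix, features and
# weights are illustrative placeholders, not taken from any surveyed model.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                # adjacency with self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                      # a 3-node path graph
H = np.random.default_rng(0).random((3, 4))       # node feature matrix
W = np.random.default_rng(1).random((4, 2))       # layer weight matrix
print(gcn_layer(A, H, W))
```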
7

Liang, Fan, Cheng Qian, Wei Yu, David Griffith, and Nada Golmie. "Survey of Graph Neural Networks and Applications." Wireless Communications and Mobile Computing 2022 (July 28, 2022): 1–18. http://dx.doi.org/10.1155/2022/9261537.

Full text
Abstract:
The advance of deep learning has shown great potential in applications such as speech, image, and video classification. In these applications, deep learning models are trained on datasets in Euclidean space with fixed dimensions and sequences. Nonetheless, the rapidly increasing demand for analyzing datasets in non-Euclidean space requires additional research. Generally speaking, finding the relationships of elements in datasets and representing such relationships as weighted graphs consisting of vertices and edges is a viable way of analyzing datasets in non-Euclidean space. However, analyzing the weighted graph-based dataset is a challenging problem in existing deep learning models. To address this issue, graph neural networks (GNNs) leverage spectral and spatial strategies to extend and implement convolution operations in non-Euclidean space. Based on graph theory, a number of enhanced GNNs have been proposed to deal with non-Euclidean datasets. In this study, we first review artificial neural networks and GNNs. We then present ways to extend deep learning models to deal with datasets in non-Euclidean space and introduce the GNN-based approaches based on spectral and spatial strategies. Furthermore, we discuss some typical Internet of Things (IoT) applications that employ spectral and spatial convolution strategies, followed by the limitations of GNNs in the current stage.
APA, Harvard, Vancouver, ISO, and other styles
8

Gao, Baojian, Xiaoning Zhao, Jun Wang, and Xiaojiang Chen. "Decomposition Based Localization for Anisotropic Sensor Networks." International Journal of Distributed Sensor Networks 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/805061.

Full text
Abstract:
Range-free localization algorithms have attracted widespread attention due to their low cost and low power consumption. However, such schemes heavily depend on the assumption that the hop-count distance between two nodes correlates well with their Euclidean distance, which is satisfied only in isotropic networks. When the network is anisotropic, holes or obstacles cause the estimated distance between nodes to deviate from their Euclidean distance, leading to a serious decline in localization accuracy. This paper develops HCD-DV-Hop for node localization in anisotropic sensor networks. HCD-DV-Hop consists of two steps. First, an anisotropic network is decomposed into several different isotropic subnetworks using the proposed Hop Count Based Decomposition (HCD) scheme. Second, the DV-Hop algorithm is carried out in each subnetwork for node localization. HCD first uses a concave/convex node recognition algorithm and a cleansing criterion to obtain the optimal concave and convex nodes based on boundary recognition, followed by segmentation of the network’s boundary. Finally, the neighboring boundary nodes of the optimal concave nodes flood the network with decomposition messages; thus, the anisotropic network is decomposed. Extensive simulations demonstrate that, compared with the range-free DV-Hop algorithm, HCD-DV-Hop can effectively reduce localization error in anisotropic networks without increasing the complexity of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
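For context, the range-free DV-Hop estimate that HCD-DV-Hop runs inside each isotropic subnetwork converts hop counts into distances via an average hop size computed at the anchors. A minimal sketch under assumed inputs (the anchor positions and hop-count tables below are illustrative, and neither the HCD decomposition nor the final multilateration step is reproduced):

```python
# A minimal sketch of the range-free DV-Hop distance estimate run inside each
# isotropic subnetwork. Anchor positions and hop-count tables are illustrative;
# the HCD decomposition and the final multilateration step are not reproduced.
import numpy as np

def dv_hop_distances(anchor_pos, hops_between_anchors, hops_to_anchors):
    """anchor_pos: (m, 2) coordinates; hop tables: (m, m) and (m,) hop counts."""
    m = len(anchor_pos)
    hop_size = np.zeros(m)
    for i in range(m):
        d = np.linalg.norm(anchor_pos - anchor_pos[i], axis=1)   # Euclidean distances
        others = np.arange(m) != i
        hop_size[i] = d[others].sum() / hops_between_anchors[i, others].sum()
    nearest = int(np.argmin(hops_to_anchors))        # anchor reached in fewest hops
    return hops_to_anchors * hop_size[nearest]       # estimated node-to-anchor distances

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
hops_between = np.array([[0, 4, 4], [4, 0, 6], [4, 6, 0]])
hops_from_node = np.array([2, 3, 3])
print(dv_hop_distances(anchors, hops_between, hops_from_node))
```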
9

Trietsch, Dan. "Augmenting Euclidean Networks—the Steiner Case." SIAM Journal on Applied Mathematics 45, no. 5 (October 1985): 855–60. http://dx.doi.org/10.1137/0145051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kartun-Giles, Alexander, Suhanya Jayaprakasam, and Sunwoo Kim. "Euclidean Matchings in Ultra-Dense Networks." IEEE Communications Letters 22, no. 6 (June 2018): 1216–19. http://dx.doi.org/10.1109/lcomm.2018.2799207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Barnett, George A., and Ronald E. Rice. "Longitudinal non-euclidean networks: Applying Galileo." Social Networks 7, no. 4 (December 1985): 287–322. http://dx.doi.org/10.1016/0378-8733(85)90010-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Maxwell, Alastair, and Konrad J. Swanepoel. "Shortest Directed Networks in the Plane." Graphs and Combinatorics 36, no. 5 (June 12, 2020): 1457–75. http://dx.doi.org/10.1007/s00373-020-02183-8.

Full text
Abstract:
Given a set of sources and a set of sinks as points in the Euclidean plane, a directed network is a directed graph drawn in the plane with a directed path from each source to each sink. Such a network may contain nodes other than the given sources and sinks, called Steiner points. We characterize the local structure of the Steiner points in all shortest-length directed networks in the Euclidean plane. This characterization implies that these networks are constructible by straightedge and compass. Our results build on unpublished work of Alfaro, Campbell, Sher, and Soto from 1989 and 1990. Part of the proof is based on a new method that uses other norms in the plane. This approach gives more conceptual proofs of some of their results, and as a consequence, we also obtain results on shortest directed networks for these norms.
APA, Harvard, Vancouver, ISO, and other styles
13

Hordan, Snir, Tal Amir, Steven J. Gortler, and Nadav Dym. "Complete Neural Networks for Complete Euclidean Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12482–90. http://dx.doi.org/10.1609/aaai.v38i11.29141.

Full text
Abstract:
Neural networks for point clouds, which respect their natural invariance to permutation and rigid motion, have enjoyed recent success in modeling geometric phenomena, from molecular dynamics to recommender systems. Yet, to date, no architecture with polynomial complexity is known to be complete, that is, able to distinguish between any pair of non-isomorphic point clouds. We fill this theoretical gap by showing that point clouds can be completely determined, up to permutation and rigid motion, by applying the 3-WL graph isomorphism test to the point cloud's centralized Gram matrix. Moreover, we formulate a Euclidean variant of the 2-WL test and show that it is also sufficient to achieve completeness. We then show how our complete Euclidean WL tests can be simulated by a Euclidean graph neural network of moderate size and demonstrate their separation capability on highly symmetrical point clouds.
APA, Harvard, Vancouver, ISO, and other styles
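The centralized Gram matrix mentioned in the abstract is invariant to rigid motions: centering removes translations, and a rotation or reflection Q turns the centered point matrix X into XQ, leaving XXᵀ unchanged. A minimal sketch verifying this invariance (the WL tests that additionally handle permutations are not reproduced here):

```python
# A minimal sketch of the centralized Gram matrix: centering removes
# translations, and a rotation/reflection Q maps Xc to XcQ, so (XcQ)(XcQ)^T = XcXc^T.
# The Euclidean WL tests that also handle permutations are not reproduced.
import numpy as np

def centralized_gram(X):
    Xc = X - X.mean(axis=0)
    return Xc @ Xc.T

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))                       # a 5-point cloud in R^3
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about the z-axis
Y = X @ Q + np.array([1.0, -2.0, 3.0])                # rigidly moved copy
print(np.allclose(centralized_gram(X), centralized_gram(Y)))  # True
```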
14

Jiang, Bin, Xinyu Wang, Li Huang, and Jian Xiao. "DeepGCNs-Att: Point cloud semantic segmentation with contextual point representations." Journal of Intelligent & Fuzzy Systems 42, no. 4 (March 4, 2022): 3827–36. http://dx.doi.org/10.3233/jifs-212030.

Full text
Abstract:
Compared with traditional Convolutional Neural Networks, Graph Convolutional Networks can characterize non-Euclidean spaces effectively and extract the local features of a point cloud with deep neural networks, but they cannot make full use of the point cloud's global features for semantic segmentation. To solve this problem, this paper proposes a novel network structure called DeepGCNs-Att that enables a deep Graph Convolutional Network to aggregate global context features efficiently. Moreover, to speed up the computation, we add an Attention layer after the Graph Convolutional Network Backbone Block to mutually enhance the connection between the distant points of the non-Euclidean space. Our model is tested on the standard benchmark S3DIS. Compared with other deep Graph Convolutional Networks, DeepGCNs-Att's mIoU is at least two percentage points higher than that of all other models, and it also shows excellent results in space complexity and computational complexity under the same number of Graph Convolutional Network layers.
APA, Harvard, Vancouver, ISO, and other styles
15

Dias, Ana Paula S., and Eliana Manuel Pinho. "Enumerating periodic patterns of synchrony via finite bidirectional networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, no. 2115 (November 16, 2009): 891–910. http://dx.doi.org/10.1098/rspa.2009.0404.

Full text
Abstract:
Periodic patterns of synchrony are lattice networks whose cells are coloured according to a local rule, or balanced colouring, and such that the overall system has spatial periodicity. These patterns depict the finite-dimensional flow-invariant subspaces for all the lattice dynamical systems, in the given lattice network, that exhibit those periods. Previous results relate the existence of periodic patterns of synchrony, in n-dimensional Euclidean lattice networks with nearest neighbour coupling architecture, with that of finite coupled cell networks that follow the same colouring rule and have all the couplings bidirectional. This paper addresses the relation between periodic patterns of synchrony and finite bidirectional coloured networks. Given an n-dimensional Euclidean lattice network with nearest neighbour coupling architecture, and a colouring rule with k colours, we enumerate all the periodic patterns of synchrony generated by a given finite network, or graph. This enumeration is constructive and based on the automorphism group of the graph.
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Lili, Chongyang Gao, Chenghan Huang, Ruibo Liu, Weicheng Ma, and Soroush Vosoughi. "Embedding Heterogeneous Networks into Hyperbolic Space Without Meta-path." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 10147–55. http://dx.doi.org/10.1609/aaai.v35i11.17217.

Full text
Abstract:
Networks found in the real world are numerous and varied. A common type of network is the heterogeneous network, where the nodes (and edges) can be of different types. Accordingly, there have been efforts at learning representations of these heterogeneous networks in low-dimensional space. However, most existing heterogeneous network embedding methods suffer from the following two drawbacks: (1) The target space is usually Euclidean. Conversely, many recent works have shown that complex networks may have hyperbolic latent anatomy, which is non-Euclidean. (2) These methods usually rely on meta-paths, which require domain-specific prior knowledge for meta-path selection. Additionally, different downstream tasks on the same network might require different meta-paths in order to generate task-specific embeddings. In this paper, we propose a novel self-guided random walk method that does not require meta-paths for embedding heterogeneous networks into hyperbolic space. We conduct thorough experiments for the tasks of network reconstruction and link prediction on two public datasets, showing that our model outperforms a variety of well-known baselines across all tasks.
APA, Harvard, Vancouver, ISO, and other styles
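Hyperbolic embeddings of the kind discussed above are commonly realized in the Poincaré ball, where the distance between points u and v inside the unit ball is arccosh(1 + 2‖u−v‖² / ((1−‖u‖²)(1−‖v‖²))). A minimal sketch of that distance (an illustrative choice of hyperbolic model; the paper's self-guided random-walk training procedure is not reproduced):

```python
# A minimal sketch of the Poincare-ball distance commonly used for hyperbolic
# network embeddings. Illustrative only: the paper's self-guided random-walk
# training procedure is not reproduced here.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between points strictly inside the unit ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / (denom + eps))

u = np.array([0.1, 0.2])
v = np.array([0.85, -0.30])       # points near the boundary are hyperbolically "far"
print(poincare_distance(u, v), np.linalg.norm(u - v))
```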
17

Weng, J. F. "Determining shortest networks in the Euclidean plane." Bulletin of the Australian Mathematical Society 49, no. 2 (April 1994): 349–50. http://dx.doi.org/10.1017/s0004972700016427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Skiscim, Christopher C., and Bruce L. Golden. "Computing k-shortest path lengths in Euclidean networks." Networks 17, no. 3 (1987): 341–52. http://dx.doi.org/10.1002/net.3230170308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Yang, Liang Hu, and Juncheng Hu. "Modeling Tree-like Heterophily on Symmetric Matrix Manifolds." Entropy 26, no. 5 (April 29, 2024): 377. http://dx.doi.org/10.3390/e26050377.

Full text
Abstract:
Tree-like structures, characterized by hierarchical relationships and power-law distributions, are prevalent in a multitude of real-world networks, ranging from social networks to citation networks and protein–protein interaction networks. Recently, there has been significant interest in utilizing hyperbolic space to model these structures, owing to its capability to represent them with diminished distortions compared to flat Euclidean space. However, real-world networks often display a blend of flat, tree-like, and circular substructures, resulting in heterophily. To address this diversity of substructures, this study aims to investigate the reconstruction of graph neural networks on the symmetric manifold, which offers a comprehensive geometric space for more effective modeling of tree-like heterophily. To achieve this objective, we propose a graph convolutional neural network operating on the symmetric positive-definite matrix manifold, leveraging Riemannian metrics to facilitate the scheme of information propagation. Extensive experiments conducted on semi-supervised node classification tasks validate the superiority of the proposed approach, demonstrating that it outperforms comparative models based on Euclidean and hyperbolic geometries.
APA, Harvard, Vancouver, ISO, and other styles
20

Bi, Xin, Zhixun Liu, Yao He, Xiangguo Zhao, Yongjiao Sun, and Hao Liu. "GNEA: A Graph Neural Network with ELM Aggregator for Brain Network Classification." Complexity 2020 (October 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/8813738.

Full text
Abstract:
Brain networks provide essential insights into the diagnosis of functional brain disorders, such as Alzheimer’s disease (AD). Many machine learning methods have been applied to learn from brain images or networks in Euclidean space. However, it is still challenging to learn complex network structures and the connectivity of brain regions in non-Euclidean space. To address this problem, in this paper, we exploit the study of brain network classification from the perspective of graph learning. We propose an aggregator based on extreme learning machine (ELM) that boosts the aggregation ability and efficiency of graph convolution without iterative tuning. Then, we design a graph neural network named GNEA (Graph Neural Network with ELM Aggregator) for the graph classification task. Extensive experiments are conducted using a real-world AD detection dataset to evaluate and compare the graph learning performances of GNEA and state-of-the-art graph learning methods. The results indicate that GNEA achieves excellent learning performance with the best graph representation ability in brain network classification applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Sen, Parongama. "Phase Transitions in Euclidean Networks: A Mini-Review." Physica Scripta T106, no. 1 (2003): 55. http://dx.doi.org/10.1238/physica.topical.106a00055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cáceres, J., D. Garijo, A. González, A. Márquez, M. L. Puertas, and P. Ribeiro. "Shortcut sets for plane Euclidean networks (Extended abstract)." Electronic Notes in Discrete Mathematics 54 (October 2016): 163–68. http://dx.doi.org/10.1016/j.endm.2016.09.029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Jong-Ho. "Minimum Euclidean distance evaluation using deep neural networks." AEU - International Journal of Electronics and Communications 112 (December 2019): 152964. http://dx.doi.org/10.1016/j.aeue.2019.152964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ahsan, Ahmad Omar, Susanna Tang, and Wei Peng. "Efficient Hyperbolic Perceptron for Image Classification." Electronics 12, no. 19 (September 25, 2023): 4027. http://dx.doi.org/10.3390/electronics12194027.

Full text
Abstract:
Deep neural networks, often equipped with powerful auto-optimization tools, find widespread use in diverse domains like NLP and computer vision. However, traditional neural architectures come with specific inductive biases, designed to reduce parameter search space, cut computational costs, or introduce domain expertise into the network design. In contrast, multilayer perceptrons (MLPs) offer greater freedom and lower inductive bias than convolutional neural networks (CNNs), making them versatile for learning complex patterns. Despite their flexibility, most neural architectures operate in a flat Euclidean space, which may not be optimal for various data types, particularly those with hierarchical correlations. In this paper, we move one step further by introducing the hyperbolic Res-MLP (HR-MLP), an architecture extending the attention-free MLP to a non-Euclidean space. HR-MLP leverages fully hyperbolic layers for feature embeddings and end-to-end image classification. Our novel Lorentz cross-patch and cross-channel layers enable direct hyperbolic operations with fewer parameters, facilitating faster training and superior performance compared to Euclidean counterparts. Experimental results on CIFAR10, CIFAR100, and MiniImageNet confirm HR-MLP’s competitive and improved performance.
APA, Harvard, Vancouver, ISO, and other styles
25

SOARES, DANYEL J. B., JOSÉ S. ANDRADE, HANS J. HERRMANN, and LUCIANO R. da SILVA. "THREE-DIMENSIONAL APOLLONIAN NETWORKS." International Journal of Modern Physics C 17, no. 08 (August 2006): 1219–26. http://dx.doi.org/10.1142/s0129183106009175.

Full text
Abstract:
We discuss the three-dimensional Apollonian network introduced by Andrade et al. for the two-dimensional case. These networks are simultaneously scale-free, small world, Euclidean, space-filling and matching graphs and have a wide range of applications going from the description of force chains in polydisperse granular packings to the geometry of fully fragmented porous media. Some of the properties of these networks, namely, the connectivity exponent, the clustering coefficient, the shortest path, and vertex betweenness are calculated and found to be particularly rich.
APA, Harvard, Vancouver, ISO, and other styles
26

Bae, Ji-Hun, Gwang-Hyun Yu, Ju-Hwan Lee, Dang Thanh Vu, Le Hoang Anh, Hyoung-Gook Kim, and Jin-Young Kim. "Superpixel Image Classification with Graph Convolutional Neural Networks Based on Learnable Positional Embedding." Applied Sciences 12, no. 18 (September 13, 2022): 9176. http://dx.doi.org/10.3390/app12189176.

Full text
Abstract:
Graph convolutional neural networks (GCNNs) have been successfully applied to a wide range of problems, including low-dimensional Euclidean structural domains representing images, videos, and speech and high-dimensional non-Euclidean domains, such as social networks and chemical molecular structures. However, in computer vision, existing GCNNs are not provided with positional information to distinguish between graphs of new structures; therefore, performance on image classification tasks in which images are represented by arbitrary graphs is significantly poor. In this work, we introduce how to initialize the positional information through a random walk algorithm and continuously learn the additional position-embedded information of various graph structures represented over the superpixel images we choose for efficiency. We call this method the graph convolutional network with learnable positional embedding applied on images (IMGCN-LPE). We apply IMGCN-LPE to three graph convolutional models (the Chebyshev graph convolutional network, graph convolutional network, and graph attention network) to validate performance on various benchmark image datasets. As a result, although not as impressive as convolutional neural networks, the proposed method outperforms various other conventional convolutional methods and demonstrates its effectiveness on the same tasks in the field of GCNNs.
APA, Harvard, Vancouver, ISO, and other styles
27

Fang, Jinyuan, Shangsong Liang, Zaiqiao Meng, and Maarten De Rijke. "Hyperspherical Variational Co-embedding for Attributed Networks." ACM Transactions on Information Systems 40, no. 3 (July 31, 2022): 1–36. http://dx.doi.org/10.1145/3478284.

Full text
Abstract:
Network-based information has been widely explored and exploited in the information retrieval literature. Attributed networks, consisting of nodes, edges as well as attributes describing properties of nodes, are a basic type of network-based data, and are especially useful for many applications. Examples include user profiling in social networks and item recommendation in user-item purchase networks. Learning useful and expressive representations of entities in attributed networks can provide more effective building blocks to down-stream network-based tasks such as link prediction and attribute inference. Practically, input features of attributed networks are normalized as unit directional vectors. However, most network embedding techniques ignore the spherical nature of inputs and focus on learning representations in a Gaussian or Euclidean space, which, we hypothesize, might lead to less effective representations. To obtain more effective representations of attributed networks, we investigate the problem of mapping an attributed network with unit normalized directional features into a non-Gaussian and non-Euclidean space. Specifically, we propose a hyperspherical variational co-embedding for attributed networks (HCAN), which is based on generalized variational auto-encoders for heterogeneous data with multiple types of entities. HCAN jointly learns latent embeddings for both nodes and attributes in a unified hyperspherical space such that the affinities between nodes and attributes can be captured effectively. We argue that this is a crucial feature in many real-world applications of attributed networks. Previous Gaussian network embedding algorithms break the assumption of uninformative prior, which leads to unstable results and poor performance. In contrast, HCAN embeds nodes and attributes as von Mises-Fisher distributions, and allows one to capture the uncertainty of the inferred representations. Experimental results on eight datasets show that HCAN yields better performance in a number of applications compared with nine state-of-the-art baselines.
APA, Harvard, Vancouver, ISO, and other styles
28

Wu, Wei, and Xuemeng Zhai. "DyLFG: A Dynamic Network Learning Framework Based on Geometry." Entropy 25, no. 12 (November 30, 2023): 1611. http://dx.doi.org/10.3390/e25121611.

Full text
Abstract:
Dynamic network representation learning has recently attracted increasing attention because real-world networks evolve over time, that is, nodes and edges join or leave the networks over time. Different from static networks, the representation learning of dynamic networks should not only consider how to capture the structural information of network snapshots, but also consider how to capture the temporal dynamics of network structure evolution from the network snapshot sequence. Existing work on dynamic network representation has two main problems: (1) A significant number of methods target dynamic networks that only allow nodes to increase over time, not decrease, which reduces the applicability of such methods to real-world networks. (2) At present, most network-embedding methods, especially dynamic network representation learning approaches, use a Euclidean embedding space. However, the network itself is geometrically non-Euclidean, which leads to geometric inconsistencies between the embedded space and the underlying space of the network, and this can affect the performance of the model. In order to solve the above two problems, we propose a geometry-based dynamic network learning framework, namely DyLFG. Our proposed framework targets dynamic networks that allow nodes and edges to join or exit the network over time. In order to extract the structural information of network snapshots, we designed a new hyperbolic geometry processing layer, which is different from the previous literature. In order to deal with the temporal dynamics of the network snapshot sequence, we propose a gated recurrent unit (GRU) module based on Ricci curvature, that is, the RGRU. In the proposed framework, we used a temporal attention layer and the RGRU to evolve the neural network weight matrix to capture temporal dynamics in the network snapshot sequence. The experimental results showed that our model outperformed the baseline approaches on the baseline datasets.
APA, Harvard, Vancouver, ISO, and other styles
29

Matveeva, N. "Comparative analysis using neural networks programming on Java for of signal recognition." System technologies 1, no. 138 (March 30, 2022): 185–91. http://dx.doi.org/10.34185/1562-9945-1-138-2022-18.

Full text
Abstract:
The results of a study of a multilayer perceptron and a radial basis function neural network for signal recognition are presented. The neural networks are implemented in Java in the NetBeans environment. The optimal number of neurons in the hidden layer is selected to build an effective neural network architecture. Experiments were performed to analyze MSE values, Euclidean distance and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
30

LÓPEZ-RUBIO, EZEQUIEL, ESTEBAN JOSÉ PALOMO, and ENRIQUE DOMÍNGUEZ. "BREGMAN DIVERGENCES FOR GROWING HIERARCHICAL SELF-ORGANIZING NETWORKS." International Journal of Neural Systems 24, no. 04 (April 3, 2014): 1450016. http://dx.doi.org/10.1142/s0129065714500166.

Full text
Abstract:
Growing hierarchical self-organizing models are characterized by the flexibility of their structure, which can easily accommodate complex input datasets. However, most proposals use the Euclidean distance as the only error measure. Here we propose a way to introduce Bregman divergences in these models, which is based on stochastic approximation principles, so that more general distortion measures can be employed. A procedure is derived to compare the performance of networks using different divergences. Moreover, a probabilistic interpretation of the model is provided, which enables its use as a Bayesian classifier. Experimental results are presented for classification and data visualization applications, which show the advantages of these divergences with respect to the classical Euclidean distance.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhou, Renjie, Xiao Wang, Jingjing Yang, Wei Zhang, and Sanyuan Zhang. "Characterizing Network Anomaly Traffic with Euclidean Distance-Based Multiscale Fuzzy Entropy." Security and Communication Networks 2021 (June 16, 2021): 1–9. http://dx.doi.org/10.1155/2021/5560185.

Full text
Abstract:
The prosperity of mobile networks and social networks brings revolutionary conveniences to our daily lives. However, due to the complexity and fragility of the network environment, network attacks are becoming more and more serious. Characterization of network traffic is commonly used to model and detect network anomalies and finally to raise the cybersecurity awareness capability of network administrators. As a tool to characterize system running status, entropy-based time-series complexity measurement methods such as Multiscale Entropy (MSE), Composite Multiscale Entropy (CMSE), and Fuzzy Approximate Entropy (FuzzyEn) have been widely used in anomaly detection. However, the existing methods calculate the distance between vectors solely using the two most different elements of the two vectors. Furthermore, the similarity of vectors is calculated using the Heaviside function, which has a problem of bouncing between 0 and 1. The Euclidean Distance-Based Multiscale Fuzzy Entropy (EDM-Fuzzy) algorithm was proposed to avoid the two disadvantages and to measure entropy values of system signals more precisely, accurately, and stably. In this paper, the EDM-Fuzzy is applied to analyze the characteristics of abnormal network traffic such as botnet network traffic and Distributed Denial of Service (DDoS) attack traffic. The experimental analysis shows that the EDM-Fuzzy entropy technology is able to characterize the differences between normal traffic and abnormal traffic. The EDM-Fuzzy entropy characteristics of ARP traffic discovered in this paper can be used to detect various types of network traffic anomalies including botnet and DDoS attacks.
APA, Harvard, Vancouver, ISO, and other styles
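The abstract's key algorithmic point is replacing the Chebyshev (max-element) vector distance and the Heaviside similarity of standard fuzzy entropy with a Euclidean distance and a smooth membership function. A minimal sketch in that spirit (the parameters m, r, n and the exponential membership form are illustrative choices, not the authors' exact EDM-Fuzzy definition):

```python
# A minimal sketch of a fuzzy-entropy estimate that uses the Euclidean distance
# between embedded vectors and a smooth exponential membership instead of the
# Chebyshev distance and Heaviside step. The parameters m, r, n and the exact
# membership form are illustrative, not the authors' EDM-Fuzzy definition.
import numpy as np

def fuzzy_entropy_euclidean(x, m=2, r=0.2, n=2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                                # tolerance relative to signal spread

    def phi(dim):
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs -= vecs.mean(axis=1, keepdims=True)     # remove each vector's baseline
        d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)  # Euclidean
        sim = np.exp(-(d ** n) / tol)                # fuzzy membership in (0, 1]
        np.fill_diagonal(sim, 0.0)                   # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
print(fuzzy_entropy_euclidean(rng.standard_normal(500)))
```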
32

Maaeda Mohsin Rashid. "K-means Clustering, Unsupervised Classification, K-NN, Euclidean Distance, Genetic Algorithm." Tikrit Journal of Pure Science 22, no. 9 (February 1, 2023): 113–17. http://dx.doi.org/10.25130/tjps.v22i9.884.

Full text
Abstract:
In recent days, the need to provide reliable data transmission over Internet traffic or cellular mobile systems has become very important. The Transmission Control Protocol (TCP) is the prevailing protocol providing reliable data transfer for end-to-end data stream services on the Internet and in many new networks. TCP congestion control has become the key factor shaping the behavior and performance of these networks. The TCP sender regulates the size of the congestion window (CWND) using the congestion control mechanism, and TCP dynamically adjusts the window size depending on packet acknowledgments (ACKs) or on indications of packet loss. TCP congestion control includes two main phases, slow start and congestion avoidance; although these two phases work separately, their combination controls the CWND and the injection of packets into the network pipe. Congestion avoidance and slow start are independent mechanisms with different objectives, but if congestion happens, they are executed together. This article provides an efficient and reliable congestion avoidance mechanism to enhance TCP performance in large-bandwidth, low-latency networks. The proposed mechanism also includes a facility to send multiple flows over the same connection, with a novel technique to estimate the number of available flows dynamically; all experiments validating the proposed techniques are performed with the NS-2 network simulator.
APA, Harvard, Vancouver, ISO, and other styles
33

Onuean, Athita, Hanmin Jung, and Krisana Chinnasarn. "Finding Optimal Stations Using Euclidean Distance and Adjustable Surrounding Sphere." Applied Sciences 11, no. 2 (January 18, 2021): 848. http://dx.doi.org/10.3390/app11020848.

Full text
Abstract:
Air quality monitoring networks (AQMNs) play an important role in air pollution management. However, when setting up an initial network, a city often lacks necessary information such as historical pollution and geographical data, which makes it challenging to establish an effective network. Meanwhile, cities with an existing network may not adequately represent the spatial coverage of air pollution issues or may face rapid urbanization where additional stations are needed. To resolve these two cases, we propose four methods for finding stations and constructing a network using Euclidean distance and the k-nearest neighbor algorithm, consisting of Euclidean Distance (ED), Fixed Surrounding Sphere (FSS), Euclidean Distance + Fixed Surrounding Sphere (ED + FSS), and Euclidean Distance + Adjustable Surrounding Sphere (ED + ASS). We introduce and apply a coverage percentage and a weighted coverage degree for evaluating the results of our proposed methods. Our experimental results show that ED + ASS is better than the other methods for finding stations to enhance spatial coverage. When setting up initial networks, coverage percentages improved by up to 22%, 37%, and 56% compared with the existing networks, and adding a station to an existing network improved coverage by up to 34%, 130%, and 39% in Sejong, Bonn, and Bangkok, respectively. Our method yields acceptable results and can serve as a guide for establishing a new network and as a tool for improving the spatial coverage of an existing network in future expansions of air monitoring.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Ning, Shigen Shen, Youxiang Duan, Siyu Huang, Wei Zhang, and Lizhuang Tan. "Non-Euclidean Graph-Convolution Virtual Network Embedding for Space–Air–Ground Integrated Networks." Drones 7, no. 3 (February 27, 2023): 165. http://dx.doi.org/10.3390/drones7030165.

Full text
Abstract:
For achieving seamless global coverage and real-time communications while providing intelligent applications with increased quality of service (QoS), AI-enabled space–air–ground integrated networks (SAGINs) have attracted widespread attention from all walks of life. However, high-intensity interactions pose fundamental challenges for resource orchestration and security issues. Meanwhile, virtual network embedding (VNE) is applied to the function decoupling of various physical networks due to its flexibility. Inspired by the above, for SAGINs with non-Euclidean structures, we propose a graph-convolution virtual network embedding algorithm. Specifically, based on the excellent decision-making properties of deep reinforcement learning (DRL), we design an orchestration network combined with graph convolution to calculate the embedding probability of nodes. It fuses the information of the neighborhood structure, fully fits the original characteristics of the physical network, and utilizes the specified reward mechanism to guide positive learning. Moreover, by imposing security-level constraints on physical nodes, it restricts resource access. All-around and rigorous experiments are carried out in a simulation environment. Finally, results on long-term average revenue, VNR acceptance ratio, and long-term revenue–cost ratio show that the proposed algorithm outperforms advanced baselines.
APA, Harvard, Vancouver, ISO, and other styles
35

Gutiérrez-Reina, Daniel, Vishal Sharma, Ilsun You, and Sergio Toral. "Dissimilarity Metric Based on Local Neighboring Information and Genetic Programming for Data Dissemination in Vehicular Ad Hoc Networks (VANETs)." Sensors 18, no. 7 (July 17, 2018): 2320. http://dx.doi.org/10.3390/s18072320.

Full text
Abstract:
This paper presents a novel dissimilarity metric based on local neighboring information and a genetic programming approach for efficient data dissemination in Vehicular Ad Hoc Networks (VANETs). The primary aim of the dissimilarity metric is to replace the Euclidean distance in probabilistic data dissemination schemes, which use the relative Euclidean distance among vehicles to determine the retransmission probability. The novel dissimilarity metric is obtained by applying a metaheuristic genetic programming approach, which provides a formula that maximizes the Pearson Correlation Coefficient between the novel dissimilarity metric and the Euclidean metric in several representative VANET scenarios. Findings show that the obtained dissimilarity metric correlates with the Euclidean distance up to 8.9% better than classical dissimilarity metrics. Moreover, the obtained dissimilarity metric is evaluated when used in well-known data dissemination schemes, such as p-persistence, polynomial and irresponsible algorithm. The obtained dissimilarity metric achieves significant improvements in terms of reachability in comparison with the classical dissimilarity metrics and the Euclidean metric-based schemes in the studied VANET urban scenarios.
APA, Harvard, Vancouver, ISO, and other styles
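For reference, the classical distance-based p-persistence rule that the evolved dissimilarity metric is meant to stand in for forwards a packet with probability proportional to the sender–receiver Euclidean distance relative to the radio range, so the farthest vehicles (which extend coverage the most) are favoured. A minimal sketch with illustrative coordinates and range:

```python
# A minimal sketch of the classical p-persistence rebroadcast rule: a receiver
# forwards with probability given by its Euclidean distance to the sender
# relative to the radio range. Coordinates and range are illustrative values.
import math
import random

def p_persistence_forward(sender_xy, receiver_xy, radio_range):
    p = min(math.dist(sender_xy, receiver_xy) / radio_range, 1.0)
    return random.random() < p        # True -> rebroadcast the packet

random.seed(1)
print(p_persistence_forward((0.0, 0.0), (180.0, 40.0), radio_range=250.0))
```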
36

Chen, Ziheng, Tianyang Xu, Xiao-Jun Wu, Rui Wang, Zhiwu Huang, and Josef Kittler. "Riemannian Local Mechanism for SPD Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7104–12. http://dx.doi.org/10.1609/aaai.v37i6.25867.

Full text
Abstract:
The Symmetric Positive Definite (SPD) matrices have received wide attention for data representation in many scientific areas. Although there are many different attempts to develop effective deep architectures for data processing on the Riemannian manifold of SPD matrices, very few solutions explicitly mine the local geometrical information in deep SPD feature representations. Given the great success of local mechanisms in Euclidean methods, we argue that it is of utmost importance to ensure the preservation of local geometric information in the SPD networks. We first analyse the convolution operator commonly used for capturing local information in Euclidean deep networks from the perspective of a higher level of abstraction afforded by category theory. Based on this analysis, we define the local information in the SPD manifold and design a multi-scale submanifold block for mining local geometry. Experiments involving multiple visual tasks validate the effectiveness of our approach.
APA, Harvard, Vancouver, ISO, and other styles
37

Hyde, S. T., S. Ramsden, T. Di Matteo, and J. J. Longdell. "Ab-initio construction of some crystalline 3D Euclidean networks." Solid State Sciences 5, no. 1 (January 2003): 35–45. http://dx.doi.org/10.1016/s1293-2558(02)00079-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cáceres, José, Delia Garijo, Antonio González, Alberto Márquez, María Luz Puertas, and Paula Ribeiro. "Shortcut sets for the locus of plane Euclidean networks." Applied Mathematics and Computation 334 (October 2018): 192–205. http://dx.doi.org/10.1016/j.amc.2018.04.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jonckheere, Edmond, Mingji Lou, Francis Bonahon, and Yuliy Baryshnikov. "Euclidean versus Hyperbolic Congestion in Idealized versus Experimental Networks." Internet Mathematics 7, no. 1 (March 14, 2011): 1–27. http://dx.doi.org/10.1080/15427951.2010.554320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hsu, D. Frank, and Xiao-Dong Hu. "On shortest two-connected Steiner networks with Euclidean distance." Networks 32, no. 2 (September 1998): 133–40. http://dx.doi.org/10.1002/(sici)1097-0037(199809)32:2<133::aid-net6>3.0.co;2-c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hu, Kai, Jiasheng Wu, Yaogen Li, Meixia Lu, Liguo Weng, and Min Xia. "FedGCN: Federated Learning-Based Graph Convolutional Networks for Non-Euclidean Spatial Data." Mathematics 10, no. 6 (March 21, 2022): 1000. http://dx.doi.org/10.3390/math10061000.

Full text
Abstract:
Federated Learning (FL) can combine multiple clients for training and keep client data local, which is a good way to protect data privacy. There are many excellent FL algorithms. However, most of these can only process data with regular structures, such as images and videos. They cannot process non-Euclidean spatial data, that is, irregular data. To address this problem, we propose a Federated Learning-Based Graph Convolutional Network (FedGCN). First, we propose a Graph Convolutional Network (GCN) as a local model of FL. Based on the classical graph convolutional neural network, TopK pooling layers and full connection layers are added to this model to improve the feature extraction ability. Furthermore, to prevent pooling layers from losing information, cross-layer fusion is used in the GCN, giving FL an excellent ability to process non-Euclidean spatial data. Second, in this paper, a federated aggregation algorithm based on an online adjustable attention mechanism is proposed. The trainable parameter ρ is introduced into the attention mechanism. The aggregation method assigns the corresponding attention coefficient to each local model, which reduces the damage caused by the inefficient local model parameters to the global model and improves the fault tolerance and accuracy of the FL algorithm. Finally, we conduct experiments on six non-Euclidean spatial datasets to verify that the proposed algorithm not only has good accuracy but also has a certain degree of generality. The proposed algorithm can also perform well in different graph neural networks.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Lin, Zheng Min Xia, Sheng Hong Li, Li Pan, and Zhi Hua Huang. "Detecting Overlapping Communities with MDS and Local Expansion FCM." Applied Mechanics and Materials 644-650 (September 2014): 3295–99. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3295.

Full text
Abstract:
Community structure is an important feature for understanding structural and functional properties in various complex networks. In this paper, we use Multidimensional Scaling (MDS) to map the nodes of a network into Euclidean space so as to preserve the distance information of the nodes, and then we use the topological features of communities to propose a local expansion strategy that detects initial seeds for FCM. Finally, FCM is used to uncover overlapping communities in complex networks. Test results on real-world and artificial networks show that the proposed algorithm is efficient and robust in uncovering overlapping community structure.
APA, Harvard, Vancouver, ISO, and other styles
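Once nodes are mapped into Euclidean space (see the MDS sketch under entry 1), overlapping memberships come from fuzzy c-means. A minimal sketch of the standard FCM updates on such coordinates (memberships are initialized at random here; the paper's local-expansion seeding of initial seeds is not reproduced):

```python
# A minimal sketch of the fuzzy c-means (FCM) step applied to node coordinates
# produced by MDS. Memberships are initialized at random; the paper's
# local-expansion seeding of initial cluster seeds is not reproduced.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                      # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                  # membership degrees per node
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(30, 2)), rng.normal(size=(30, 2)) + 5.0])
U, centers = fuzzy_c_means(X, c=2)
print(U[:3].round(2))                                      # overlapping memberships
```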
43

Hirsch, C., D. Neuhäuser, C. Gloaguen, and V. Schmidt. "First Passage Percolation on Random Geometric Graphs and an Application to Shortest-Path Trees." Advances in Applied Probability 47, no. 2 (June 2015): 328–54. http://dx.doi.org/10.1239/aap/1435236978.

Full text
Abstract:
We consider Euclidean first passage percolation on a large family of connected random geometric graphs in the d-dimensional Euclidean space encompassing various well-known models from stochastic geometry. In particular, we establish a strong linear growth property for shortest-path lengths on random geometric graphs which are generated by point processes. We consider the event that the growth of shortest-path lengths between two (end) points of the path does not admit a linear upper bound. Our linear growth property implies that the probability of this event tends to zero sub-exponentially fast if the direct (Euclidean) distance between the endpoints tends to infinity. Besides, for a wide class of stationary and isotropic random geometric graphs, our linear growth property implies a shape theorem for the Euclidean first passage model defined by such random geometric graphs. Finally, this shape theorem can be used to investigate a problem which is considered in structural analysis of fixed-access telecommunication networks, where we determine the limiting distribution of the length of the longest branch in the shortest-path tree extracted from a typical segment system if the intensity of network stations converges to 0.
APA, Harvard, Vancouver, ISO, and other styles
44

Hirsch, C., D. Neuhäuser, C. Gloaguen, and V. Schmidt. "First Passage Percolation on Random Geometric Graphs and an Application to Shortest-Path Trees." Advances in Applied Probability 47, no. 02 (June 2015): 328–54. http://dx.doi.org/10.1017/s0001867800007886.

Full text
Abstract:
We consider Euclidean first passage percolation on a large family of connected random geometric graphs in the d-dimensional Euclidean space encompassing various well-known models from stochastic geometry. In particular, we establish a strong linear growth property for shortest-path lengths on random geometric graphs which are generated by point processes. We consider the event that the growth of shortest-path lengths between two (end) points of the path does not admit a linear upper bound. Our linear growth property implies that the probability of this event tends to zero sub-exponentially fast if the direct (Euclidean) distance between the endpoints tends to infinity. Besides, for a wide class of stationary and isotropic random geometric graphs, our linear growth property implies a shape theorem for the Euclidean first passage model defined by such random geometric graphs. Finally, this shape theorem can be used to investigate a problem which is considered in structural analysis of fixed-access telecommunication networks, where we determine the limiting distribution of the length of the longest branch in the shortest-path tree extracted from a typical segment system if the intensity of network stations converges to 0.
APA, Harvard, Vancouver, ISO, and other styles
45

Pareja, Aldo, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, and Charles Leiserson. "EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5363–70. http://dx.doi.org/10.1609/aaai.v34i04.5984.

Full text
Abstract:
Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNNs) in the static setting, we approach further practical scenarios where the graph dynamically evolves. Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics. These methods require the knowledge of a node in the full time span (including both training and testing) and are less applicable to the frequent change of the node set. In some extreme scenarios, the node sets at different time steps may completely differ. To resolve this challenge, we propose EvolveGCN, which adapts the graph convolutional network (GCN) model along the temporal dimension without resorting to node embeddings. The proposed approach captures the dynamism of the graph sequence by using an RNN to evolve the GCN parameters. Two architectures are considered for the parameter evolution. We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. The code is available at https://github.com/IBM/EvolveGCN.
APA, Harvard, Vancouver, ISO, and other styles
46

Nathiya, N., and C. Amulya Smyrna. "Infinite Schrödinger networks." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki 31, no. 4 (December 2021): 640–50. http://dx.doi.org/10.35634/vm210408.

Full text
Abstract:
Finite-difference models of partial differential equations such as Laplace or Poisson equations lead to a finite network. A discretized equation on an unbounded plane or space results in an infinite network. In an infinite network, Schrödinger operator (perturbed Laplace operator, $q$-Laplace) is defined to develop a discrete potential theory which has a model in the Schrödinger equation in the Euclidean spaces. The relation between Laplace operator $\Delta$-theory and the $\Delta_q$-theory is investigated. In the $\Delta_q$-theory the Poisson equation is solved if the network is a tree and a canonical representation for non-negative $q$-superharmonic functions is obtained in general case.
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Lingxiao, Shuzhe Shi, and Kai Zhou. "Unsupervised learning spectral functions with neural networks." Journal of Physics: Conference Series 2586, no. 1 (September 1, 2023): 012158. http://dx.doi.org/10.1088/1742-6596/2586/1/012158.

Full text
Abstract:
Reconstructing spectral functions from Euclidean Green’s functions is an ill-posed inverse problem that is crucial for understanding the properties of many-body systems. In this proceeding, we propose an automatic differentiation (AD) framework utilizing neural network representations for spectral reconstruction from propagator observables. We construct spectral functions using neural networks and optimize the network parameters unsupervisedly based on the reconstruction error of the propagator. Compared to the maximum entropy method, the AD framework demonstrates better performance in situations with high noise levels. It is noteworthy that neural network representations provide non-local regularization, which has the potential to significantly improve the solution of inverse problems.
APA, Harvard, Vancouver, ISO, and other styles
48

Cho, Kyungjin, Jihun Shin, and Eunjin Oh. "Approximate Distance Oracle for Fault-Tolerant Geometric Spanners." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20087–95. http://dx.doi.org/10.1609/aaai.v38i18.29987.

Full text
Abstract:
In this paper, we present approximate distance and shortest-path oracles for fault-tolerant Euclidean spanners motivated by the routing problem in real-world road networks. A fault-tolerant Euclidean spanner for a set of points in Euclidean space is a graph in which, despite the deletion of small number of any points, the distance between any two points in the damaged graph is an approximation of their Euclidean distance. Given a fault-tolerant Euclidean spanner and a small approximation factor, our data structure allows us to compute an approximate distance between two points in the damaged spanner in constant time when a query involves any two points and a small set of failed points. Additionally, by incorporating additional data structures, we can return a path itself in time almost linear in the length of the returned path. Both data structures require near-linear space.
APA, Harvard, Vancouver, ISO, and other styles
49

ANDRÁS, PÉTER. "KERNEL-KOHONEN NETWORKS." International Journal of Neural Systems 12, no. 02 (April 2002): 117–35. http://dx.doi.org/10.1142/s0129065702001084.

Full text
Abstract:
We investigate the combination of the Kohonen networks with the kernel methods in the context of classification. We use the idea of kernel functions to handle products of vectors of arbitrary dimension. We indicate how to build Kohonen networks with robust classification performance by transformation of the original data vectors into a possibly infinite dimensional space. The resulting Kohonen networks preserve a non-Euclidean neighborhood structure of the input space that fits the properties of the data. We show how to optimize the transformation of the data vectors in order to obtain higher classification performance. We compare the kernel-Kohonen networks with the regular Kohonen networks in the context of a classification task.
APA, Harvard, Vancouver, ISO, and other styles
50

DE LOS RIOS, P., and T. PETERMANN. "EXISTENCE, COST AND ROBUSTNESS OF SPATIAL SMALL-WORLD NETWORKS." International Journal of Bifurcation and Chaos 17, no. 07 (July 2007): 2331–42. http://dx.doi.org/10.1142/s0218127407018427.

Full text
Abstract:
Small-world networks embedded in Euclidean space represent useful cartoon models for a number of real systems such as electronic circuits, communication systems, the large-scale brain architecture and others. Since the small-world behavior relies on the presence of long-range connections that are likely to have a cost which is a growing function of the length, we explore whether it is possible to choose suitable probability distributions for the shortcut lengths so as to preserve the small-world feature and, at the same time, to minimize the network cost. The flow distribution for such networks, and their robustness, are also investigated.
APA, Harvard, Vancouver, ISO, and other styles