To view other types of publications on this topic, follow the link: Network data representation.

Journal articles on the topic "Network data representation"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Network data representation".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse journal articles from many disciplines and organize your bibliography correctly.

1

R. Tamilarasu and G. Soundarya Devi. "Improvising Connection in 5G by Means of Particle Swarm Optimization Techniques." South Asian Journal of Engineering and Technology 14, no. 2 (April 30, 2024): 1–6. http://dx.doi.org/10.26524/sajet.2023.14.2.

Abstract:
Data and network embedding techniques are essential for representing complex data structures in a lower-dimensional space, aiding in tasks like data inference and network reconstruction by assigning nodes to concise representations while preserving the network's structure. The integration of Particle Swarm Optimization (PSO) with matrix factorization methods optimizes mapping functions and parameters during the embedding process, enhancing representation learning efficiency. Combining PSO with techniques like Deep Walk highlights its adaptability as a robust optimization tool for extracting meaningful representations from intricate data and network architectures. This collaboration significantly advances network inference and reconstruction methodologies by streamlining the representation of complex data structures. Leveraging PSO's optimization capabilities enables researchers to extract high-quality information from data networks, improving the accuracy of data inference outcomes. The amalgamation of PSO with data and network embedding methodologies not only enhances the quality of extracted information but also drives innovations in network analysis and related fields. This integration streamlines representation learning and advances network analysis methodologies, enabling more precise data inference and reconstruction. The adaptability and efficiency of PSO in extracting meaningful representations from complex data structures underscore its significance in advancing network inference and reconstruction techniques, contributing to the evolution of network analysis methodologies.
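
As a rough, hypothetical illustration of the kind of pipeline this abstract describes (not the authors' implementation), the sketch below treats each PSO particle as a candidate embedding matrix E and uses the matrix-factorization reconstruction error ||A - E E^T|| as the fitness; the dimensions, swarm size and coefficients are arbitrary choices.

```python
import numpy as np

def pso_embedding(A, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Toy PSO search for a node embedding E minimising ||A - E E^T||_F."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    shape = (n_particles, n, dim)
    pos = rng.normal(scale=0.1, size=shape)          # particle positions = candidate embeddings
    vel = np.zeros(shape)
    fitness = lambda E: np.linalg.norm(A - E @ E.T)  # reconstruction error as fitness

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(shape), rng.random(shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if pbest_f.min() < fitness(gbest):
            gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# tiny example: a 4-node path graph
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(pso_embedding(A).round(2))
```
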
2

Ye, Zhonglin, Haixing Zhao, Ke Zhang, Yu Zhu, and Zhaoyang Wang. "An Optimized Network Representation Learning Algorithm Using Multi-Relational Data." Mathematics 7, no. 5 (May 21, 2019): 460. http://dx.doi.org/10.3390/math7050460.

Abstract:
Representation learning aims to encode the relationships of research objects into low-dimensional, compressible, and distributed representation vectors. The purpose of network representation learning is to learn the structural relationships between network vertices. Knowledge representation learning is oriented to model the entities and relationships in knowledge bases. In this paper, we first introduce the idea of knowledge representation learning into network representation learning, namely, we propose a new approach to model the vertex triplet relationships based on DeepWalk without TransE. Consequently, we propose an optimized network representation learning algorithm using multi-relational data, MRNR, which introduces the multi-relational data between vertices into the procedures of network representation learning. Importantly, we adopted a kind of higher order transformation strategy to optimize the learnt network representation vectors. The purpose of MRNR is that multi-relational data (triplets) can effectively guide and constrain the procedures of network representation learning. The experimental results demonstrate that the proposed MRNR can learn the discriminative network representations, which show better performance on network classification, visualization, and case study tasks compared to the proposed baseline algorithms in this paper.
3

Armenta, Marco, and Pierre-Marc Jodoin. "The Representation Theory of Neural Networks." Mathematics 9, no. 24 (December 13, 2021): 3216. http://dx.doi.org/10.3390/math9243216.

Abstract:
In this work, we show that neural networks can be represented via the mathematical theory of quiver representations. More specifically, we prove that a neural network is a quiver representation with activation functions, a mathematical object that we represent using a network quiver. Furthermore, we show that network quivers gently adapt to common neural network concepts such as fully connected layers, convolution operations, residual connections, batch normalization, pooling operations and even randomly wired neural networks. We show that this mathematical representation is by no means an approximation of what neural networks are as it exactly matches reality. This interpretation is algebraic and can be studied with algebraic methods. We also provide a quiver representation model to understand how a neural network creates representations from the data. We show that a neural network saves the data as quiver representations, and maps it to a geometrical space called the moduli space, which is given in terms of the underlying oriented graph of the network, i.e., its quiver. This results as a consequence of our defined objects and of understanding how the neural network computes a prediction in a combinatorial and algebraic way. Overall, representing neural networks through the quiver representation theory leads to 9 consequences and 4 inquiries for future research that we believe are of great interest to better understand what neural networks are and how they work.
4

Aristizábal Q, Luz Angela, and Nicolás Toro G. "Multilayer Representation and Multiscale Analysis on Data Networks." International Journal of Computer Networks & Communications 13, no. 3 (May 31, 2021): 41–55. http://dx.doi.org/10.5121/ijcnc.2021.13303.

Abstract:
The constant increase in the complexity of data networks motivates the search for strategies that make it possible to reduce current monitoring times. This paper shows the way in which multilayer network representation and the application of multiscale analysis techniques, as applied to software-defined networks, allows for the visualization of anomalies from "coarse views of the network topology". This implies the analysis of fewer data, and consequently the reduction of the time that a process takes to monitor the network. The fact that software-defined networks allow for the obtention of a global view of network behavior facilitates detail recovery from affected zones detected in monitoring processes. The method is evaluated by calculating the reduction factor of nodes, checked during anomaly detection, with respect to the total number of nodes in the network.
5

Nguyễn, Tuấn, Nguyen Hai Hao, Dang Le Dinh Trang, Nguyen Van Tuan, and Cao Van Loi. "Robust anomaly detection methods for contamination network data." Journal of Military Science and Technology, no. 79 (May 19, 2022): 41–51. http://dx.doi.org/10.54939/1859-1043.j.mst.79.2022.41-51.

Abstract:
Recently, latent representation models, such as Shrink Autoencoder (SAE), have been demonstrated as robust feature representations for one-class learning-based network anomaly detection. In these studies, benchmark network datasets that are processed in laboratory environments to make them completely clean are often employed for constructing and evaluating such models. In real-world scenarios, however, we cannot guarantee collecting 100% pure normal data for constructing latent representation models. Therefore, this work aims to investigate the characteristics of the latent representation of SAE in learning normal data under some contamination scenarios. This attempts to find out whether the latent feature space of SAE is robust to contamination or not, and which contamination scenarios it handles best. We design a set of experiments using normal data contaminated with different anomaly types and different proportions of anomalies for the investigation. Other latent representation methods such as Denoising Autoencoder (DAE) and Principal Component Analysis (PCA) are also used for comparison with the performance of SAE. The experimental results on four CTU13 scenarios show that the latent representation of SAE often outperforms and is less sensitive to contamination than the others.
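
SAE itself requires a deep-learning stack, but the contamination protocol described above can be illustrated with the PCA baseline the abstract also mentions: mix a chosen proportion of anomalies into the normal training data, then score test points by reconstruction error. A toy sketch with synthetic data; dataset shapes, rates and parameters are placeholders, not the paper's setup.

```python
import numpy as np

def contaminate(normal, anomalies, rate, rng):
    """Mix a fraction `rate` of anomalous rows into the normal training set."""
    k = int(rate * len(normal))
    picked = anomalies[rng.choice(len(anomalies), size=k, replace=True)]
    return np.vstack([normal, picked])

def pca_scores(train, test, n_components=2):
    """Anomaly score = reconstruction error after projecting onto top components."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    comps = vt[:n_components]
    recon = (test - mean) @ comps.T @ comps + mean
    return np.linalg.norm(test - recon, axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 10))       # stand-in "normal" traffic features
anomalies = rng.normal(4, 1, size=(100, 10))    # stand-in anomalies
for rate in (0.0, 0.05, 0.20):                  # contamination scenarios
    train = contaminate(normal, anomalies, rate, rng)
    scores = pca_scores(train, np.vstack([normal[:50], anomalies[:50]]))
    print(f"rate={rate:.2f}  mean normal score={scores[:50].mean():.2f}  "
          f"mean anomaly score={scores[50:].mean():.2f}")
```
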
6

Du, Xin, Yulong Pei, Wouter Duivesteijn, and Mykola Pechenizkiy. "Fairness in Network Representation by Latent Structural Heterogeneity in Observational Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3809–16. http://dx.doi.org/10.1609/aaai.v34i04.5792.

Abstract:
While recent advances in machine learning place much focus on the fairness of algorithmic decision making, topics about fairness of representation, especially fairness of network representation, are still underexplored. Network representation learning learns a function mapping nodes to low-dimensional vectors. Structural properties, e.g. communities and roles, are preserved in the latent embedding space. In this paper, we argue that latent structural heterogeneity in the observational data could bias the classical network representation model. The unknown heterogeneous distribution across subgroups raises new challenges for fairness in machine learning. Pre-defined groups with sensitive attributes cannot properly tackle the potential unfairness of network representation. We propose a method which can automatically discover subgroups that are unfairly treated by the network representation model. The fairness measure we propose can evaluate complex targets with multi-degree interactions. We conduct randomized controlled experiments on synthetic datasets and verify our methods on real-world datasets. Both quantitative and qualitative results show that our method is effective in recovering the fairness of network representations. Our research draws insight into how structural heterogeneity across subgroups restricted by attributes affects the fairness of network representation learning.
7

Chen, Dongming, Mingshuo Nie, Jiarui Yan, Jiangnan Meng, and Dongqi Wang. "Network Representation Learning Algorithm Based on Community Folding." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 2 (March 2022): 415–23. http://dx.doi.org/10.53106/160792642022032302020.

Abstract:
Network representation learning is a machine learning method that maps network topology and node information into a low-dimensional vector space, which can reduce the temporal and spatial complexity of downstream network data mining tasks such as node classification and graph clustering. This paper addresses the problem that neighborhood-information-based network representation learning algorithms ignore the global topological information of the network. We propose the Network Representation Learning Algorithm Based on Community Folding (CF-NRL), which considers the influence of community structure on the global topology of the network. Each community of the target network is regarded as a folding unit, the same network representation learning algorithm is used to learn the vector representation of the nodes on the folded network and the target network, and then the vector representations are concatenated correspondingly to obtain the final vector representation of each node. Experimental results show the excellent performance of the proposed algorithm.
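
A minimal sketch of the folding idea as we read it from this abstract: contract each detected community into one super-node, run the same embedding routine on both the folded graph and the target graph, and concatenate the two vectors per node. Here networkx community detection and a plain spectral embedding stand in for the authors' choices; every detail is an assumption, not the CF-NRL implementation.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def spectral_embed(G, dim=2):
    """Stand-in embedding: leading eigenvectors of the adjacency matrix."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    vals, vecs = np.linalg.eigh(A)
    emb = vecs[:, np.argsort(vals)[::-1][:dim]]
    return dict(zip(nodes, emb))

def community_folding_embedding(G, dim=2):
    communities = list(greedy_modularity_communities(G))
    node_to_comm = {v: i for i, c in enumerate(communities) for v in c}

    # Folded graph: one super-node per community, edges where communities are linked.
    F = nx.Graph()
    F.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = node_to_comm[u], node_to_comm[v]
        if cu != cv:
            F.add_edge(cu, cv)

    emb_g = spectral_embed(G, dim)   # embedding of the target network
    emb_f = spectral_embed(F, dim)   # embedding of the folded network
    # Concatenate each node's vector with its community's vector on the folded graph.
    return {v: np.concatenate([emb_g[v], emb_f[node_to_comm[v]]]) for v in G.nodes()}

G = nx.karate_club_graph()
vecs = community_folding_embedding(G)
print(len(vecs), "nodes ->", vecs[0].shape, "dimensional vectors")
```
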
8

Zhang, Xiaoxian, Jianpei Zhang, and Jing Yang. "Large-scale dynamic social data representation for structure feature learning." Journal of Intelligent & Fuzzy Systems 39, no. 4 (October 21, 2020): 5253–62. http://dx.doi.org/10.3233/jifs-189010.

Abstract:
The problems caused by the curse of dimensionality and computational complexity have become important issues to be solved in the field of social network research. The existing methods for network feature learning are mostly based on static and small-scale assumptions, and their learning is not adapted to the unique attributes of social networks. Therefore, existing learning methods cannot adapt to the dynamic, large-scale, and even super-large-scale nature of current social networks. This paper mainly studies the feature representation learning of large-scale dynamic social network structure. In this paper, positive and negative damping sampling of network nodes in different classes is carried out, and a dynamic feature learning method for newly added nodes is constructed, which makes the model feasible for the extraction of structural features of large-scale social networks in the process of dynamic change. The obtained node feature representation has better dynamic robustness. By selecting the real datasets of three large-scale dynamic social networks and carrying out dynamic link prediction experiments on them, it is found that DNPS achieves a large performance improvement over the benchmark model in terms of prediction accuracy and time efficiency. When the α value is around 0.7, the model effect is optimal.
9

Kapoor, Maya, Michael Napolitano, Jonathan Quance, Thomas Moyer, and Siddharth Krishnan. "Detecting VoIP Data Streams: Approaches Using Hidden Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15519–27. http://dx.doi.org/10.1609/aaai.v37i13.26840.

Abstract:
The use of voice-over-IP technology has rapidly expanded over the past several years, and has thus become a significant portion of traffic in the real, complex network environment. Deep packet inspection and middlebox technologies need to analyze call flows in order to perform network management, load-balancing, content monitoring, forensic analysis, and intelligence gathering. Because the session setup and management data can be sent on different ports or out of sync with VoIP call data over the Real-time Transport Protocol (RTP) with low latency, inspection software may miss calls or parts of calls. To solve this problem, we engineered two different deep learning models based on hidden representation learning. MAPLE, a matrix-based encoder which transforms packets into an image representation, uses convolutional neural networks to determine RTP packets from data flow. DATE is a density-analysis based tensor encoder which transforms packet data into a three-dimensional point cloud representation. We then perform density-based clustering over the point clouds as latent representations of the data, and classify packets as RTP or non-RTP based on their statistical clustering features. In this research, we show that these tools may allow a data collection and analysis pipeline to begin detecting and buffering RTP streams for later session association, solving the initial drop problem. MAPLE achieves over ninety-nine percent accuracy in RTP/non-RTP detection. The results of our experiments show that both models can not only classify RTP versus non-RTP packet streams, but could extend to other network traffic classification problems in real deployments of network analysis pipelines.
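
The abstract describes MAPLE as a matrix-based encoder that turns a packet into an image for a CNN. A hypothetical version of that transformation is sketched below: packet bytes are truncated or zero-padded into a square grayscale matrix. The 32×32 size and padding byte are our assumptions, not details from the paper.

```python
import numpy as np

def packet_to_matrix(payload: bytes, side: int = 32, pad_byte: int = 0) -> np.ndarray:
    """Encode raw packet bytes as a side x side grayscale 'image' in [0, 1]."""
    size = side * side
    buf = payload[:size].ljust(size, bytes([pad_byte]))   # truncate or zero-pad
    img = np.frombuffer(buf, dtype=np.uint8).reshape(side, side)
    return img.astype(np.float32) / 255.0

# A fake RTP-like packet: 12-byte header followed by pseudo-random payload bytes.
rng = np.random.default_rng(1)
packet = bytes([0x80, 0x60]) + bytes(10) + rng.integers(0, 256, 500, dtype=np.uint8).tobytes()
image = packet_to_matrix(packet)
print(image.shape, image.min(), image.max())   # (32, 32) matrices would feed a small CNN
```
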
10

Giannarakis, Nick, Alexandra Silva, and David Walker. "ProbNV: probabilistic verification of network control planes." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–30. http://dx.doi.org/10.1145/3473595.

Abstract:
ProbNV is a new framework for probabilistic network control plane verification that strikes a balance between generality and scalability. ProbNV is general enough to encode a wide range of features from the most common protocols (eBGP and OSPF) and yet scalable enough to handle challenging properties, such as probabilistic all-failures analysis of medium-sized networks with 100-200 devices. When there are a small, bounded number of failures, networks with up to 500 devices may be verified in seconds. ProbNV operates by translating raw CISCO configurations into a probabilistic and functional programming language designed for network verification. This language comes equipped with a novel type system that characterizes the sort of representation to be used for each data structure: concrete for the usual representation of values; symbolic for a BDD-based representation of sets of values; and multi-value for an MTBDD-based representation of values that depend upon symbolics. Careful use of these varying representations speeds execution of symbolic simulation of network models. The MTBDD-based representations are also used to calculate probabilistic properties of network models once symbolic simulation is complete. We implement the language and evaluate its performance on benchmarks constructed from real network topologies and synthesized routing policies.
11

Hyvönen, Jörkki, Jari Saramäki, and Kimmo Kaski. "Efficient data structures for sparse network representation." International Journal of Computer Mathematics 85, no. 8 (August 2008): 1219–33. http://dx.doi.org/10.1080/00207160701753629.

12

Wong, S. V., and A. M. S. Hamouda. "Machinability data representation with artificial neural network." Journal of Materials Processing Technology 138, no. 1-3 (July 2003): 538–44. http://dx.doi.org/10.1016/s0924-0136(03)00143-2.

13

Buckles, Bill P., Frederick E. Petry, and Jayadev Pillai. "Network data models for representation of uncertainty." Fuzzy Sets and Systems 38, no. 2 (November 1990): 171–90. http://dx.doi.org/10.1016/0165-0114(90)90148-y.

14

Zhan, Huixin, and Victor S. Sheng. "Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16143–44. http://dx.doi.org/10.1609/aaai.v37i13.26932.

Abstract:
Although recent network representation learning (NRL) works in text-attributed networks demonstrated superior performance for various graph inference tasks, learning network representations could always raise privacy concerns when nodes represent people or human-related variables. Moreover, standard NRLs that leverage structural information from a graph proceed by first encoding pairwise relationships into learned representations and then analysing its properties. This approach is fundamentally misaligned with problems where the relationships involve multiple points, and topological structure must be encoded beyond pairwise interactions. Fortunately, the machinery of topological data analysis (TDA) and, in particular, simplicial neural networks (SNNs) offer a mathematically rigorous framework to evaluate not only higher-order interactions, but also global invariant features of the observed graph to systematically learn topological structures. It is critical to investigate if the representation outputs from SNNs are more vulnerable compared to regular representation outputs from graph neural networks (GNNs) via pairwise interactions. In my dissertation, I will first study learning the representations with text attributes for simplicial complexes (RT4SC) via SNNs. Then, I will conduct research on two potential attacks on the representation outputs from SNNs: (1) membership inference attack, which infers whether a certain node of a graph is inside the training data of the GNN model; and (2) graph reconstruction attacks, which infer the confidential edges of a text-attributed network. Finally, I will study a privacy-preserving deterministic differentially private alternating direction method of multiplier to learn secure representation outputs from SNNs that capture multi-scale relationships and facilitate the passage from local structure to global invariant features on text-attributed networks.
15

Zhang, Hu, Jingjing Zhou, Ru Li, and Yue Fan. "Network representation learning method embedding linear and nonlinear network structures." Semantic Web 13, no. 3 (April 6, 2022): 511–26. http://dx.doi.org/10.3233/sw-212968.

Abstract:
With the rapid development of neural networks, much attention has been focused on network embedding for complex network data, which aims to learn low-dimensional embeddings of nodes in the network and to effectively apply the learned network representations to various graph-based analytical tasks. Two typical models exist, namely the shallow random walk network representation method and deep learning models such as graph convolution networks (GCNs). The former can capture the linear structure of the network using depth-first search (DFS) and breadth-first search (BFS), whereas the Hierarchical GCN (HGCN) is an unsupervised graph embedding that can describe the global nonlinear structure of the network by aggregating node information. However, the two existing kinds of models cannot simultaneously capture the nonlinear and linear structure information of nodes. Thus, the nodal characteristics of nonlinear and linear structures are explored in this paper, and an unsupervised representation method based on HGCN that joins the learning of shallow and deep models is proposed. Experiments on node classification and dimension reduction visualization are carried out on citation, language, and traffic networks. The results show that, compared with the existing shallow network representation model and deep network model, the proposed model achieves better performance in terms of micro-F1, macro-F1 and accuracy scores.
16

Vernon, Matthew C., and Matt J. Keeling. "Representing the UK's cattle herd as static and dynamic networks." Proceedings of the Royal Society B: Biological Sciences 276, no. 1656 (October 14, 2008): 469–76. http://dx.doi.org/10.1098/rspb.2008.1009.

Abstract:
Network models are increasingly being used to understand the spread of diseases through sparsely connected populations, with particular interest in the impact of animal movements upon the dynamics of infectious diseases. Detailed data collected by the UK government on the movement of cattle may be represented as a network, where animal holdings are nodes, and an edge is drawn between nodes where a movement of animals has occurred. These network representations may vary from a simple static representation, to a more complex, fully dynamic one where daily movements are explicitly captured. Using stochastic disease simulations, a wide range of network representations of the UK cattle herd are compared. We find that the simpler static network representations are often deficient when compared with a fully dynamic representation, and should therefore be used only with caution in epidemiological modelling. In particular, due to temporal structures within the dynamic network, static networks consistently fail to capture the predicted epidemic behaviour associated with dynamic networks even when parameterized to match early growth rates.
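
The contrast the abstract draws can be made concrete with a few lines of code: the same movement records yield either one aggregated static graph or a per-day dynamic sequence of graphs. The holding names and dates below are invented for illustration.

```python
import networkx as nx
from collections import defaultdict

# (day, source holding, destination holding) movement records -- toy data
movements = [
    (1, "farm_A", "market_X"),
    (2, "market_X", "farm_C"),
    (3, "farm_B", "market_X"),
    (5, "farm_C", "farm_D"),
    (9, "farm_D", "abattoir"),
]

# Static representation: one edge wherever any movement ever occurred.
static = nx.DiGraph()
static.add_edges_from((u, v) for _, u, v in movements)

# Dynamic representation: a separate edge list per day, preserving temporal order.
dynamic = defaultdict(nx.DiGraph)
for day, u, v in movements:
    dynamic[day].add_edge(u, v)

print("static:", static.number_of_nodes(), "holdings,", static.number_of_edges(), "edges")
for day in sorted(dynamic):
    print(f"day {day}:", list(dynamic[day].edges()))

# The static graph contains the path farm_B -> market_X -> farm_C, yet the movement
# from farm_B (day 3) happened after the one to farm_C (day 2), so no animals could
# actually have travelled along it; the dynamic view preserves that ordering.
```
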
17

Iddianozie, Chidubem, and Gavin McArdle. "Towards Robust Representations of Spatial Networks Using Graph Neural Networks." Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.

Abstract:
The effectiveness of a machine learning model is impacted by the data representation used. Consequently, it is crucial to investigate robust representations for efficient machine learning methods. In this paper, we explore the link between data representations and model performance for inference tasks on spatial networks. We argue that representations which explicitly encode the relations between spatial entities would improve model performance. Specifically, we consider homogeneous and heterogeneous representations of spatial networks. We recognise that the expressive nature of the heterogeneous representation may benefit spatial networks and could improve model performance on certain tasks. Thus, we carry out an empirical study using Graph Neural Network models for two inference tasks on spatial networks. Our results demonstrate that heterogeneous representations improve model performance for downstream inference tasks on spatial networks.
18

Hu, Hao, Mengya Gao, and Mingsheng Wu. "Relieving the Incompatibility of Network Representation and Classification for Long-Tailed Data Distribution." Computational Intelligence and Neuroscience 2021 (December 27, 2021): 1–10. http://dx.doi.org/10.1155/2021/6702625.

Abstract:
In the real-world scenario, data often have a long-tailed distribution and training deep neural networks on such an imbalanced dataset has become a great challenge. The main problem caused by a long-tailed data distribution is that common classes will dominate the training results and achieve a very low accuracy on the rare classes. Recent work focuses on improving the network representation ability to overcome the long-tailed problem, while it always ignores adapting the network classifier to a long-tailed case, which will cause the “incompatibility” problem of network representation and network classifier. In this paper, we use knowledge distillation to solve the long-tailed data distribution problem and fully optimize the network representation and classifier simultaneously. We propose multiexperts knowledge distillation with class-balanced sampling to jointly learn high-quality network representation and classifier. Also, a channel activation-based knowledge distillation method is also proposed to improve the performance further. State-of-the-art performance on several large-scale long-tailed classification datasets shows the superior generalization of our method.
19

Xu, Jian, Thanuka L. Wickramarathne, and Nitesh V. Chawla. "Representing higher-order dependencies in networks." Science Advances 2, no. 5 (May 2016): e1600028. http://dx.doi.org/10.1126/sciadv.1600028.

Abstract:
To ensure the correctness of network analysis methods, the network (as the input) has to be a sufficiently accurate representation of the underlying data. However, when representing sequential data from complex systems, such as global shipping traffic or Web clickstream traffic as networks, conventional network representations that implicitly assume the Markov property (first-order dependency) can quickly become limiting. This assumption holds that, when movements are simulated on the network, the next movement depends only on the current node, discounting the fact that the movement may depend on several previous steps. However, we show that data derived from many complex systems can show up to fifth-order dependencies. In these cases, the oversimplifying assumption of the first-order network representation can lead to inaccurate network analysis results. To address this problem, we propose the higher-order network (HON) representation that can discover and embed variable orders of dependencies in a network representation. Through a comprehensive empirical evaluation and analysis, we establish several desirable characteristics of HON, including accuracy, scalability, and direct compatibility with the existing suite of network analysis methods. We illustrate how HON can be applied to a broad variety of tasks, such as random walking, clustering, and ranking, and we demonstrate that, by using it as input, HON yields more accurate results without any modification to these tasks.
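
HON discovers variable orders of dependency per node; the fixed second-order sketch below only illustrates the underlying trick of encoding the dependency in the node identity ("current|previous"), so that a walk's next step can depend on where it came from. The toy trajectories are invented, and the rule for choosing each node's order is omitted.

```python
from collections import Counter

# Toy trajectories (e.g., port-call sequences in shipping traffic).
paths = [
    ["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C"],
    ["D", "B", "E"], ["D", "B", "E"], ["D", "B", "E"],
]

# First-order network: the next step depends only on the current node,
# so B -> C and B -> E look like a 50/50 split.
first_order = Counter((p[i], p[i + 1]) for p in paths for i in range(len(p) - 1))

# Second-order network: nodes are "current|previous", so B reached from A is a
# different state than B reached from D, and the A->...->C vs D->...->E split survives.
second_order = Counter()
for p in paths:
    prev = f"{p[1]}|{p[0]}"
    for i in range(1, len(p) - 1):
        cur = f"{p[i + 1]}|{p[i]}"
        second_order[(prev, cur)] += 1
        prev = cur

print("first order :", dict(first_order))
print("second order:", dict(second_order))
```
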
20

Zhang, Yixin, Lizhen Cui, Wei He, Xudong Lu, and Shipeng Wang. "Behavioral data assists decisions: exploring the mental representation of digital-self." International Journal of Crowd Science 5, no. 2 (July 26, 2021): 185–203. http://dx.doi.org/10.1108/ijcs-03-2021-0011.

Abstract:
Purpose: The behavioral decision-making of the digital-self is one of the important research contents of the network of crowd intelligence. The factors and mechanisms that affect decision-making have attracted the attention of many researchers. Among the factors that influence decision-making, the mind of the digital-self plays an important role. Exploring the influence mechanism of digital-selfs' minds on decision-making is helpful to understand the behaviors of the crowd intelligence network and improve the transaction efficiency in the network of CrowdIntell.

Design/methodology/approach: In this paper, the authors use a behavioral pattern perception layer, a multi-aspect perception layer and a memory network enhancement layer to adaptively explore the mind of a digital-self and generate the mental representation of a digital-self from three aspects, including external behavior, multi-aspect factors of the mind and memory units. The authors use the mental representations to assist behavioral decision-making.

Findings: The evaluation on real-world open data sets shows that the proposed method can model the mind and verify the influence of the mind on behavioral decisions, and its performance is better than the universal baseline methods for modeling user interest.

Originality/value: In general, the authors use the behaviors of the digital-self to mine and explore its mind, which is used to assist the digital-self to make decisions and promote transactions in the network of CrowdIntell. This work is one of the early attempts to use neural networks to model the mental representation of the digital-self.
21

Decker, Kevin T., and Brett J. Borghetti. "Hyperspectral Point Cloud Projection for the Semantic Segmentation of Multimodal Hyperspectral and Lidar Data with Point Convolution-Based Deep Fusion Neural Networks." Applied Sciences 13, no. 14 (July 14, 2023): 8210. http://dx.doi.org/10.3390/app13148210.

Abstract:
The fusion of dissimilar data modalities in neural networks presents a significant challenge, particularly in the case of multimodal hyperspectral and lidar data. Hyperspectral data, typically represented as images with potentially hundreds of bands, provide a wealth of spectral information, while lidar data, commonly represented as point clouds with millions of unordered points in 3D space, offer structural information. The complementary nature of these data types presents a unique challenge due to their fundamentally different representations requiring distinct processing methods. In this work, we introduce an alternative hyperspectral data representation in the form of a hyperspectral point cloud (HSPC), which enables ingestion and exploitation with point cloud processing neural network methods. Additionally, we present a composite fusion-style, point convolution-based neural network architecture for the semantic segmentation of HSPC and lidar point cloud data. We investigate the effects of the proposed HSPC representation for both unimodal and multimodal networks ingesting a variety of hyperspectral and lidar data representations. Finally, we compare the performance of these networks against each other and previous approaches. This study paves the way for innovative approaches to multimodal remote sensing data fusion, unlocking new possibilities for enhanced data analysis and interpretation.
22

Liang, Sen, Zhi-ze Zhou, Yu-dong Guo, Xuan Gao, Ju-yong Zhang, and Hu-jun Bao. "Facial landmark disentangled network with variational autoencoder." Applied Mathematics-A Journal of Chinese Universities 37, no. 2 (June 2022): 290–305. http://dx.doi.org/10.1007/s11766-022-4589-0.

Abstract:
Learning disentangled representations of data is a key problem in deep learning. Specifically, disentangling 2D facial landmarks into different factors (e.g., identity and expression) is widely used in applications such as face reconstruction, face reenactment and talking heads. However, due to the sparsity of landmarks and the lack of accurate labels for the factors, it is hard to learn the disentangled representation of landmarks. To address these problems, we propose a simple and effective model named FLD-VAE to disentangle arbitrary facial landmarks into identity and expression latent representations, based on a Variational Autoencoder framework. Besides, we propose three invariant loss functions at both the latent and data levels to constrain the invariance of representations during the training stage. Moreover, we implement an identity preservation loss to further enhance the representation ability of the identity factor. To the best of our knowledge, this is the first work to disentangle identity and expression factors simultaneously, end to end, from a single facial landmark input.
23

Craven, Mark W., and Jude W. Shavlik. "Understanding Time-Series Networks: A Case Study in Rule Extraction." International Journal of Neural Systems 08, no. 04 (August 1997): 373–84. http://dx.doi.org/10.1142/s0129065797000380.

Abstract:
A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We have developed an algorithm, called TREPAN, for extracting comprehensible, symbolic representations from trained neural networks. Given a trained network, TREPAN produces a decision tree that approximates the concept represented by the network. In this article, we discuss the application of TREPAN to a neural network trained on a noisy time series task: predicting the Dollar–Mark exchange rate. We present experiments showing that TREPAN is able to extract a decision tree from this network that equals the network in terms of predictive accuracy, yet provides a comprehensible concept representation. Moreover, our experiments indicate that decision trees induced directly from the training data using conventional algorithms match neither the accuracy nor the comprehensibility of the tree extracted by TREPAN.
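
The real TREPAN queries the trained network as an oracle and builds m-of-n splits from generated samples; the crude stand-in below, which simply fits an ordinary decision tree to the network's predictions on the training inputs, is offered only to illustrate the fidelity-versus-accuracy comparison. The data and model settings are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for the exchange-rate task: a noisy binary prediction problem.
X, y = make_classification(n_samples=2000, n_features=8, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_train, y_train)

# Fit a tree to the *network's* predictions, so the tree approximates the network's concept.
oracle_labels = net.predict(X_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, oracle_labels)

print("network accuracy:", net.score(X_test, y_test))
print("tree fidelity   :", (tree.predict(X_test) == net.predict(X_test)).mean())
print("tree accuracy   :", tree.score(X_test, y_test))
print(export_text(tree, max_depth=2))   # the comprehensible part: readable split rules
```
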
24

Bast, Hannah, and Sabine Storandt. "Frequency Data Compression for Public Transportation Network Algorithms (Extended Abstract)." Proceedings of the International Symposium on Combinatorial Search 4, no. 1 (August 20, 2021): 205–6. http://dx.doi.org/10.1609/socs.v4i1.18302.

Abstract:
Timetable information in public transportation networks exhibits a large degree of redundancy; e.g., consider a bus going from station A to station B at 6:00, 6:15, 6:30, 6:45, 7:00, 7:15, 7:30, ..., 20:00; the very same data can be provided by a frequency-based representation as '6:00-20:00, every 15 minutes' in considerably less space. Nevertheless, a common graph model for routing in public transportation networks is the time-expanded representation, where a single node is created for each arrival/departure event. We will introduce a frequency-based graph model which allows for a significantly more compact representation of the network, resulting also in a speed-up for station-to-station queries. Moreover, we will describe a new variant of Dijkstra's algorithm where the labels are also frequency-based. This approach allows for accelerating profile queries in public transportation networks.
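
The abstract's own example (a 6:00-20:00 service every 15 minutes) compresses into a single (first, last, headway) triple. The sketch below shows that compression and a "next departure" query answered directly on the compact form; the function names and the minutes-after-midnight encoding are our choices, not the paper's.

```python
def compress(departures):
    """Collapse a sorted list of departure times (minutes after midnight)
    into (first, last, headway) runs."""
    runs, i = [], 0
    while i < len(departures):
        j, step = i, None
        while j + 1 < len(departures):
            gap = departures[j + 1] - departures[j]
            if step is None:
                step = gap
            if gap != step:
                break
            j += 1
        runs.append((departures[i], departures[j], step or 0))
        i = j + 1
    return runs

def next_departure(runs, t):
    """Earliest departure at or after minute t, answered directly on the compact form."""
    best = None
    for first, last, headway in runs:
        if t <= first:
            cand = first
        elif headway and t <= last:
            cand = first + -(-(t - first) // headway) * headway   # ceil to the next slot
        else:
            continue
        best = cand if best is None else min(best, cand)
    return best

# The abstract's example: 6:00-20:00 every 15 minutes -> 57 explicit times vs. one triple.
times = list(range(6 * 60, 20 * 60 + 1, 15))
print(compress(times))                       # [(360, 1200, 15)]
print(next_departure(compress(times), 367))  # 375, i.e. the 6:15 departure after 6:07
```
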
25

Xu, Liang, Yue Zhao, Xiaona Xu, Yigang Liu, and Qiang Ji. "Latent Regression Bayesian Network for Speech Representation." Electronics 12, no. 15 (August 4, 2023): 3342. http://dx.doi.org/10.3390/electronics12153342.

Abstract:
In this paper, we present a novel approach for speech representation using latent regression Bayesian networks (LRBN) to address the issue of poor performance in low-resource language speech systems. LRBN, a lightweight unsupervised learning model, learns data distribution and high-level features, unlike computationally expensive large models, such as Wav2vec 2.0. To evaluate the effectiveness of LRBN in learning speech representations, we conducted experiments on five different low-resource languages and applied them to two downstream tasks: phoneme classification and speech recognition. Our experimental results demonstrate that LRBN outperforms prevailing speech representation methods in both tasks, highlighting its potential in the realm of speech representation learning for low-resource languages.
26

Naseer, Sheraz, Rao Faizan Ali, P. D. D. Dominic, and Yasir Saleem. "Learning Representations of Network Traffic Using Deep Neural Networks for Network Anomaly Detection: A Perspective towards Oil and Gas IT Infrastructures." Symmetry 12, no. 11 (November 16, 2020): 1882. http://dx.doi.org/10.3390/sym12111882.

Abstract:
Oil and gas organizations depend on their IT infrastructure, which is a small part of their industrial automation infrastructure, to function effectively. The oil and gas (O&G) organizations' industrial automation infrastructure landscape is complex. To permit focused and effective studies, industrial systems infrastructure is divided into functional levels by the Instrumentation, Systems and Automation Society (ISA) Standard ANSI/ISA-95:2005. This research focuses on the ISA-95:2005 level-4 IT infrastructure to address the network anomaly detection problem and so ensure the security and reliability of oil and gas resource planning, process planning and operations management. Anomaly detectors try to recognize patterns of anomalous behavior in network traffic, and their performance depends heavily on the extraction time and quality of the network traffic features or representations used to train the detector. Creating efficient representations from large volumes of network traffic to develop anomaly detection models is a time- and resource-intensive task. In this study we propose, implement and evaluate the use of deep learning to learn effective network data representations from raw network traffic in order to develop data-driven anomaly detection systems. The proposed methodology provides an automated and cost-effective replacement for feature extraction, which is otherwise a time- and resource-intensive task when developing data-driven anomaly detectors. The ISCX-2012 dataset is used to represent ISA-95 level-4 network traffic because the O&G network traffic at this level is not much different from normal internet traffic. We trained four representation learning models using popular deep neural network architectures to extract deep representations from ISCX 2012 traffic flows. A total of sixty anomaly detectors were trained by the authors using twelve conventional machine learning algorithms to compare the performance of the aforementioned deep representations with that of a human-engineered, handcrafted network data representation. The comparisons were performed using well-known model evaluation parameters. The results showed that deep representations are a promising feature-engineering replacement for developing anomaly detection models for IT infrastructure security. In our future research, we intend to investigate the effectiveness of deep representations, extracted from ISA-95:2005 Level 2-3 traffic comprising SCADA systems, for anomaly detection in critical O&G systems.
27

Gatts, C., and A. Mariano. "Data Categorization and Neural Pattern Recognition." Microscopy and Microanalysis 3, S2 (August 1997): 933–34. http://dx.doi.org/10.1017/s1431927600011557.

Abstract:
The natural ability of Artificial Neural Networks to perform pattern recognition tasks makes them a valuable tool in Electron Microscopy, especially when large data sets are involved. The application of Neural Pattern Recognition to HREM, although incipient, has already produced interesting results both for one-dimensional spectra and 2D images. In the case of 1D spectra, e.g. a set of EELS spectra acquired during a line scan, given a “vigilance parameter” (which sets the threshold for the correlation between two spectra to be high enough to consider them similar), an ART-like network can distribute the incoming spectra into classes of similarity, defining a standard representation for each class. In order to enhance the discrimination ability of the network, the standard representations are orthonormalized, allowing subtle differences between spectra and peak overlaps to be resolved. The projection of the incoming vectors onto the basis vectors thus formed gives rise to a profile of the data set.
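
ART networks are more involved than this, but the vigilance idea can be illustrated with a leader-style clustering sketch: a spectrum joins the class whose standard representation it correlates with above the vigilance threshold, otherwise it opens a new class. The orthonormalization step mentioned above is omitted, and the synthetic peaks merely stand in for EELS spectra.

```python
import numpy as np

def vigilance_cluster(spectra, vigilance=0.9):
    """Assign each spectrum to the class whose prototype it correlates with above
    `vigilance`; otherwise start a new class. Prototypes are running means."""
    classes, counts, labels = [], [], []
    for s in spectra:
        s = s / np.linalg.norm(s)
        corrs = [float(s @ (c / np.linalg.norm(c))) for c in classes]
        if corrs and max(corrs) >= vigilance:
            k = int(np.argmax(corrs))
            classes[k] = (classes[k] * counts[k] + s) / (counts[k] + 1)
            counts[k] += 1
        else:
            classes.append(s.copy())
            counts.append(1)
            k = len(classes) - 1
        labels.append(k)
    return labels, classes

# Two synthetic peak shapes with noise as a stand-in for a line scan of spectra.
x = np.linspace(0, 1, 64)
rng = np.random.default_rng(0)
scan = [np.exp(-(x - c) ** 2 / 0.01) + rng.normal(0, 0.05, x.size)
        for c in ([0.3] * 5 + [0.7] * 5)]
labels, classes = vigilance_cluster(scan, vigilance=0.9)
print(labels)   # spectra with the same peak position fall into the same class
```
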
28

Altuntas, Volkan. "NodeVector: A Novel Network Node Vectorization with Graph Analysis and Deep Learning." Applied Sciences 14, no. 2 (January 16, 2024): 775. http://dx.doi.org/10.3390/app14020775.

Abstract:
Network node embedding captures structural and relational information of nodes in the network and allows for us to use machine learning algorithms for various prediction tasks on network data that have an inherently complex and disordered structure. Network node embedding should preserve as much information as possible about important network properties where information is stored, such as network structure and node properties, while representing nodes as numerical vectors in a lower-dimensional space than the original higher dimensional space. Superior node embedding algorithms are a powerful tool for machine learning with effective and efficient node representation. Recent research in representation learning has led to significant advances in automating features through unsupervised learning, inspired by advances in natural language processing. Here, we seek to improve the representation quality of node embeddings with a new node vectorization technique that uses network analysis to overcome network-based information loss. In this study, we introduce the NodeVector algorithm, which combines network analysis and neural networks to transfer information from the target network to node embedding. As a proof of concept, our experiments performed on different categories of network datasets showed that our method achieves better results than its competitors for target networks. This is the first study to produce node representation by unsupervised learning using the combination of network analysis and neural networks to consider network data structure. Based on experimental results, the use of network analysis, complex initial node representation, balanced negative sampling, and neural networks has a positive effect on the representation quality of network node embedding.
29

Zhang, Ye, Yanqi Gao, Yupeng Zhou, Jianan Wang, and Minghao Yin. "MRMLREC: A Two-Stage Approach for Addressing Data Sparsity in MOOC Video Recommendation (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23709–11. http://dx.doi.org/10.1609/aaai.v38i21.30536.

Abstract:
With the abundance of learning resources available on massive open online course (MOOC) platforms, the issue of interactive data sparsity has emerged as a significant challenge. This paper introduces MRMLREC, an efficient MOOC video recommendation approach which consists of two main stages, multi-relational representation and multi-level recommendation, aiming to solve the problem of data sparsity. In the multi-relational representation stage, MRMLREC adopts a tripartite approach, constructing relational graphs based on temporal sequences, the course-video relation, and the knowledge concept-video relation. These graphs are processed by a Graph Convolution Network (GCN) and two variant Graph Attention Networks (GAT) to derive representations. A variant of the Long Short-Term Memory network (LSTM) then integrates these multi-dimensional data to enhance the overall representation. The multi-level recommendation stage introduces three prediction tasks at varying levels (courses, knowledge concepts, and videos) to mitigate data sparsity and improve the interpretability of video recommendations. Beam search (BS) is employed to identify the top-β items at each level, refining the subsequent level's search space and enhancing recommendation efficiency. Additionally, an optional layer offers both personalization and diversification modes, ensuring variety in recommended videos and maintaining learner engagement. Comprehensive experiments demonstrate the effectiveness of MRMLREC on two real-world instances from XuetangX.
30

Milano, Marianna, Giuseppe Agapito, and Mario Cannataro. "Challenges and Limitations of Biological Network Analysis." BioTech 11, no. 3 (July 7, 2022): 24. http://dx.doi.org/10.3390/biotech11030024.

Abstract:
High-Throughput technologies are producing an increasing volume of data that needs large amounts of data storage, effective data models and efficient, possibly parallel analysis algorithms. Pathway and interactomics data are represented as graphs and add a new dimension of analysis, allowing, among other features, graph-based comparison of organisms’ properties. For instance, in biological pathway representation, the nodes can represent proteins, RNA and fat molecules, while the edges represent the interaction between molecules. Otherwise, biological networks such as Protein–Protein Interaction (PPI) Networks, represent the biochemical interactions among proteins by using nodes that model the proteins from a given organism, and edges that model the protein–protein interactions, whereas pathway networks enable the representation of biochemical-reaction cascades that happen within the cells or tissues. In this paper, we discuss the main models for standard representation of pathways and PPI networks, the data models for the representation and exchange of pathway and protein interaction data, the main databases in which they are stored and the alignment algorithms for the comparison of pathways and PPI networks of different organisms. Finally, we discuss the challenges and the limitations of pathways and PPI network representation and analysis. We have identified that network alignment presents a lot of open problems worthy of further investigation, especially concerning pathway alignment.
31

Rossi, R. A., L. K. McDowell, D. W. Aha, and J. Neville. "Transforming Graph Data for Statistical Relational Learning." Journal of Artificial Intelligence Research 45 (October 30, 2012): 363–441. http://dx.doi.org/10.1613/jair.3659.

Abstract:
Relational data representations have become an increasingly important topic due to the recent proliferation of network datasets (e.g., social, biological, information networks) and a corresponding increase in the application of Statistical Relational Learning (SRL) algorithms to these domains. In this article, we examine and categorize techniques for transforming graph-based relational data to improve SRL algorithms. In particular, appropriate transformations of the nodes, links, and/or features of the data can dramatically affect the capabilities and results of SRL algorithms. We introduce an intuitive taxonomy for data representation transformations in relational domains that incorporates link transformation and node transformation as symmetric representation tasks. More specifically, the transformation tasks for both nodes and links include (i) predicting their existence, (ii) predicting their label or type, (iii) estimating their weight or importance, and (iv) systematically constructing their relevant features. We motivate our taxonomy through detailed examples and use it to survey competing approaches for each of these tasks. We also discuss general conditions for transforming links, nodes, and features. Finally, we highlight challenges that remain to be addressed.
32

Zhang, Sen, Shaobo Li, Xiang Li, and Yong Yao. "Representation of Traffic Congestion Data for Urban Road Traffic Networks Based on Pooling Operations." Algorithms 13, no. 4 (April 2, 2020): 84. http://dx.doi.org/10.3390/a13040084.

Abstract:
In order to improve the efficiency of transportation networks, it is critical to forecast traffic congestion. Large-scale traffic congestion data have become available and accessible, yet they need to be properly represented in order to avoid overfitting, reduce the requirements of computational resources, and be utilized effectively by various methodologies and models. Inspired by pooling operations in deep learning, we propose a representation framework for traffic congestion data in urban road traffic networks. This framework consists of grid-based partition of urban road traffic networks and a pooling operation to reduce multiple values into an aggregated one. We also propose using a pooling operation to calculate the maximum value in each grid (MAV). Raw snapshots of traffic congestion maps are transformed and represented as a series of matrices which are used as inputs to a spatiotemporal congestion prediction network (STCN) to evaluate the effectiveness of representation when predicting traffic congestion. STCN combines convolutional neural networks (CNNs) and long short-term memory neural network (LSTMs) for their spatiotemporal capability. CNNs can extract spatial features and dependencies of traffic congestion between roads, and LSTMs can learn their temporal evolution patterns and correlations. An empirical experiment on an urban road traffic network shows that when incorporated into our proposed representation framework, MAV outperforms other pooling operations in the effectiveness of the representation of traffic congestion data for traffic congestion prediction, and that the framework is cost-efficient in terms of computational resources.
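
The MAV operation described here (keep the maximum congestion value inside each grid cell) is ordinary block max-pooling; a small numpy sketch, assuming the congestion map is a 2D array whose sides divide evenly into the grid size:

```python
import numpy as np

def mav_pool(congestion_map: np.ndarray, grid: int) -> np.ndarray:
    """Maximum-value (MAV) pooling: the largest congestion value in each grid cell."""
    h, w = congestion_map.shape
    assert h % grid == 0 and w % grid == 0, "map must tile evenly into grid cells"
    blocks = congestion_map.reshape(h // grid, grid, w // grid, grid)
    return blocks.max(axis=(1, 3))

rng = np.random.default_rng(0)
snapshot = rng.random((8, 8))        # one raw congestion snapshot over the road network
pooled = mav_pool(snapshot, grid=4)  # 2 x 2 matrix passed on to the prediction network
print(pooled.shape, "\n", pooled.round(2))
```
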
33

Shcherbakov, A. V., V. G. Kharitonenko, A. I. Chuprov, and A. E. Gainov. "Ensuring Data Uniqueness in Semantic Networks." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 228 (June 2023): 36–40. http://dx.doi.org/10.14489/vkit.2023.06.pp.036-040.

Abstract:
The article gives a brief description of knowledge representation models. Atoms of meaning (basic, minimal informational units), combined with each other to express a common meaning, represent knowledge (data about data, metadata). It is shown that most existing knowledge representation models are based on the network representation model. A method is proposed to ensure the uniqueness (originality) of the set of data underlying the network model of knowledge representation. To ensure the uniqueness of knowledge representation by a set of data, the article proposes to use the fundamental theorem of arithmetic: data items are denoted by prime numbers (identifiers); when several prime numbers (data identifiers), which together convey the aggregate semantic meaning, are multiplied, a natural number is obtained that serves as an identifier of the knowledge. The use of the fundamental theorem of arithmetic also provides a formalization of important data properties: internal interpretability, structuredness, connectivity, semantic metrics and activity. The presence of these properties in data indicates that it is already data over data, or in other words, knowledge. In certain cases, this can reduce the computational complexity of the algorithm of the linguistic processor.
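
The scheme is easy to make concrete: give each atomic data item its own prime, identify a knowledge unit by the product of its atoms' primes, and let unique factorization guarantee uniqueness and reduce membership tests to divisibility. A toy sketch (the vocabulary and helper names are ours, not the paper's):

```python
from itertools import count

def primes():
    """Yield 2, 3, 5, 7, ... (trial division is plenty for a toy vocabulary)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# Assign one prime identifier to each atomic "atom of meaning".
vocabulary = ["network", "represents", "data", "graph", "node"]
prime_of = dict(zip(vocabulary, primes()))

def knowledge_id(atoms):
    """Identifier of a knowledge unit = product of its atoms' primes (order-free, unique)."""
    ident = 1
    for a in atoms:
        ident *= prime_of[a]
    return ident

def contains(ident, atom):
    """Unique factorization => an atom belongs to the knowledge iff its prime divides the id."""
    return ident % prime_of[atom] == 0

k1 = knowledge_id(["network", "represents", "data"])
k2 = knowledge_id(["data", "represents", "network"])   # same meaning, same identifier
print(k1, k1 == k2, contains(k1, "graph"), contains(k1, "data"))
```
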
34

Heo, Seongsil, Sungsik Kim, and Jaekoo Lee. "BIMO: Bootstrap Inter–Intra Modality at Once Unsupervised Learning for Multivariate Time Series." Applied Sciences 14, no. 9 (April 30, 2024): 3825. http://dx.doi.org/10.3390/app14093825.

Abstract:
It is difficult to learn meaningful representations of time-series data since they are sparsely labeled and unpredictable. Hence, we propose bootstrap inter–intra modality at once (BIMO), an unsupervised representation learning method based on time series. Unlike previous works, the proposed BIMO method learns both inter-sample and intra-temporal modality representations simultaneously without negative pairs. BIMO comprises a main network and two auxiliary networks, namely inter-auxiliary and intra-auxiliary networks. The main network is trained to learn inter–intra modality representations sequentially by regulating the use of each auxiliary network dynamically. Thus, BIMO thoroughly learns inter–intra modality representations simultaneously. The experimental results demonstrate that the proposed BIMO method outperforms the state-of-the-art unsupervised methods and achieves comparable performance to existing supervised methods.
35

Idiart, Marco, Barry Berk, and L. F. Abbott. "Reduced Representation by Neural Networks with Restricted Receptive Fields." Neural Computation 7, no. 3 (May 1995): 507–17. http://dx.doi.org/10.1162/neco.1995.7.3.507.

Abstract:
Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
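
One way to picture the setup, under our own assumptions rather than the paper's equations: each unit sees only a restricted window of the input and is trained with Oja's correlation-based rule, so each unit drifts toward the principal component of its own receptive field. The data and learning-rate choices below are illustrative only.

```python
import numpy as np

def train_restricted_units(data, field_size, lr=0.01, epochs=50, seed=0):
    """One Oja-rule unit per non-overlapping receptive field of width `field_size`."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    starts = list(range(0, dim - field_size + 1, field_size))
    weights = [rng.normal(size=field_size) * 0.1 for _ in starts]
    for _ in range(epochs):
        for x in data:
            for w, s in zip(weights, starts):
                patch = x[s:s + field_size]
                y = w @ patch
                w += lr * y * (patch - y * w)   # Oja's rule: Hebbian term with weight decay
    return starts, weights

rng = np.random.default_rng(1)
data = rng.normal(size=(300, 8))
data[:, :4] += np.outer(rng.normal(size=300), [1, 1, 1, 1])   # correlated block of inputs
starts, weights = train_restricted_units(data, field_size=4)
for s, w in zip(starts, weights):
    print(f"field [{s}:{s + 4}]  |w|={np.linalg.norm(w):.2f}  w={w.round(2)}")
```
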
36

Bautista, John Lorenzo, Yun Kyung Lee, and Hyun Soon Shin. "Speech Emotion Recognition Based on Parallel CNN-Attention Networks with Multi-Fold Data Augmentation." Electronics 11, no. 23 (November 28, 2022): 3935. http://dx.doi.org/10.3390/electronics11233935.

Abstract:
In this paper, an automatic speech emotion recognition (SER) task of classifying eight different emotions was carried out using parallel networks trained on the Ryerson Audio-Visual Dataset of Speech and Song (RAVDESS). A combination of a CNN-based network and attention-based networks, running in parallel, was used to model both spatial features and temporal feature representations. Multiple augmentation techniques using Additive White Gaussian Noise (AWGN), SpecAugment, Room Impulse Response (RIR), and Tanh Distortion were used to augment the training data to further generalize the model representation. Raw audio data were transformed into Mel-Spectrograms as the model's input. Using CNN's proven capability in image classification and spatial feature representations, the spectrograms were treated as images with the height and width represented by the spectrogram's time and frequency scales. Temporal feature representations were captured by attention-based Transformer and BLSTM-Attention modules. Proposed architectures of the parallel CNN-based networks running along with Transformer and BLSTM-Attention modules were compared with standalone CNN architectures and attention-based networks, as well as with hybrid architectures with CNN layers wrapped in time-distributed wrappers stacked on attention-based networks. In these experiments, the highest accuracies of 89.33% for a Parallel CNN-Transformer network and 85.67% for a Parallel CNN-BLSTM-Attention network were achieved on a 10% hold-out test set from the dataset. These networks showed promising results based on their accuracies, while keeping significantly fewer training parameters compared with non-parallel hybrid models.
37

Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.

Abstract:
Multi-modal transportation recommendation aims to provide the most appropriate travel route with various transportation modes according to certain criteria. After analyzing large-scale navigation data, we find that route representations exhibit two patterns: spatio-temporal autocorrelations within transportation networks and the semantic coherence of route sequences. However, few studies consider both patterns when developing multi-modal transportation systems. To this end, in this paper, we study multi-modal transportation recommendation with unified route representation learning by exploiting both spatio-temporal dependencies in transportation networks and the semantic coherence of historical routes. Specifically, we propose to unify dynamic graph representation learning and hierarchical multi-task learning for multi-modal transportation recommendation. Along this line, we first transform the multi-modal transportation network into time-dependent multi-view transportation graphs and propose a spatio-temporal graph neural network module to capture the spatial and temporal autocorrelation. Then, we introduce a coherence-aware attentive route representation learning module to project arbitrary-length routes into fixed-length representation vectors, with explicit modeling of route coherence from historical routes. Moreover, we develop a hierarchical multi-task learning module to differentiate route representations for different transport modes, guided by the final recommendation feedback as well as multiple auxiliary tasks placed in different network layers. Extensive experimental results on two large-scale real-world datasets demonstrate that the proposed system outperforms eight baselines.
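The step of projecting an arbitrary-length route into a fixed-length vector can be sketched with simple additive attention over route segments. This is a generic attentive-pooling sketch under assumed feature sizes, not the paper's module.

```python
import torch
import torch.nn as nn

class AttentiveRouteEncoder(nn.Module):
    """Sketch: pool a variable-length route (sequence of segment features)
    into a fixed-length vector with additive attention."""
    def __init__(self, step_dim=16, hidden=32):
        super().__init__()
        self.step_proj = nn.Linear(step_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, steps, mask):            # steps: (B, L, step_dim), mask: (B, L) bool
        h = torch.tanh(self.step_proj(steps))  # (B, L, hidden)
        scores = self.score(h).squeeze(-1)     # (B, L)
        scores = scores.masked_fill(~mask, float("-inf"))   # ignore padded segments
        attn = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (attn * h).sum(dim=1)           # (B, hidden) fixed-length route vector

enc = AttentiveRouteEncoder()
steps = torch.randn(2, 7, 16)                  # two candidate routes, up to 7 segments
mask = torch.tensor([[True] * 7, [True] * 4 + [False] * 3])
route_vec = enc(steps, mask)                   # -> (2, 32)
```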
38

Zhang, Kainan, Zhipeng Cai, and Daehee Seo. "Privacy-Preserving Federated Graph Neural Network Learning on Non-IID Graph Data." Wireless Communications and Mobile Computing 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/8545101.

Abstract:
Since the concept of federated learning (FL) was proposed by Google in 2017, many applications have adopted FL technology due to its outstanding performance in data integration, computing performance, privacy protection, etc. However, most traditional federated learning-based applications focus on image processing and natural language processing, with few achievements in graph neural networks due to the non-independent and identically distributed (non-IID) nature of graph data. Representation learning on graph-structured data generates graph embeddings, which help machines understand graphs effectively. Meanwhile, privacy protection plays an even more important role in analyzing graph-structured data such as social networks. Hence, this paper proposes PPFL-GNN, a novel privacy-preserving federated graph neural network framework for node representation learning, which is pioneering work on graph neural network-based federated learning. In PPFL-GNN, clients utilize local graph datasets to generate graph embeddings and integrate information from collaborating clients through federated learning to produce more accurate representation results. More importantly, by integrating embedding alignment techniques into PPFL-GNN, we overcome the obstacles of federated learning on non-IID graph data and can further reduce privacy exposure by sharing preferred information.
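As context for the federated side of such a framework, here is a minimal sketch of the FedAvg-style aggregation step that federated GNN training builds on: per-client parameters are averaged with weights proportional to local dataset size. The embedding-alignment and privacy mechanisms of PPFL-GNN are not shown, and the toy `Linear` clients stand in for real GNN models.

```python
import copy
import torch

def federated_average(client_state_dicts, client_sizes):
    """Sketch: FedAvg aggregation of per-client model parameters, weighted by
    local dataset size (privacy mechanisms omitted)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# toy usage with two "clients" sharing the same architecture
net_a = torch.nn.Linear(4, 2)
net_b = torch.nn.Linear(4, 2)
global_state = federated_average([net_a.state_dict(), net_b.state_dict()], [120, 80])
net_a.load_state_dict(global_state)
```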
39

Wang, Jing, Songhe Feng, Gengyu Lyu, and Jiazheng Yuan. "SURER: Structure-Adaptive Unified Graph Neural Network for Multi-View Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15520–27. http://dx.doi.org/10.1609/aaai.v38i14.29478.

Abstract:
Deep Multi-view Graph Clustering (DMGC) aims to partition instances into different groups using the graph information extracted from multi-view data. The mainstream framework of DMGC methods applies graph neural networks to embed structure information into the view-specific representations and fuses them into a consensus representation. However, on the one hand, we find that the graph learned in advance is not ideal for clustering, as it is constructed from the original multi-view data and local connections. On the other hand, most existing methods learn the consensus representation in a late-fusion manner, which fails to propagate structure relations across multiple views. Inspired by these observations, we propose a Structure-adaptive Unified gRaph nEural network for multi-view clusteRing (SURER), which can jointly learn a heterogeneous multi-view unified graph and robust graph neural networks for multi-view clustering. Specifically, we first design a graph structure learning module to refine the original view-specific attribute graphs, removing false edges and discovering potential connections. We then integrate the refined view-specific attribute graphs into a unified heterogeneous graph by linking the representations of the same sample from different views. Furthermore, we use the unified heterogeneous graph as the input of a graph neural network to learn a consensus representation for each instance, effectively integrating complementary information from various views. Extensive experiments on diverse datasets demonstrate the superior effectiveness of our method compared to other state-of-the-art approaches.
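The construction of a unified heterogeneous graph can be illustrated with a small sketch: view-specific graphs over the same samples are placed on the block diagonal, and cross-view edges connect the copies of each sample. This is a schematic interpretation under assumed binary adjacency matrices, not the SURER implementation.

```python
import numpy as np

def unify_views(adj_views):
    """Sketch: stack V view-specific graphs over the same n samples into one
    (V*n x V*n) block-diagonal graph, then add cross-view edges linking the
    copies of each sample across views."""
    n = adj_views[0].shape[0]
    V = len(adj_views)
    A = np.zeros((V * n, V * n))
    for v, adj in enumerate(adj_views):
        A[v*n:(v+1)*n, v*n:(v+1)*n] = adj          # within-view structure
    for i in range(n):                              # cross-view "same sample" links
        for v in range(V):
            for w in range(v + 1, V):
                A[v*n + i, w*n + i] = A[w*n + i, v*n + i] = 1.0
    return A

view1 = np.eye(4)
view2 = np.ones((4, 4)) - np.eye(4)
unified = unify_views([view1, view2])               # -> shape (8, 8)
```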
40

Poulton, Mary M., Ben K. Sternberg, and Charles E. Glass. "Location of subsurface targets in geophysical data using neural networks." GEOPHYSICS 57, no. 12 (December 1992): 1534–44. http://dx.doi.org/10.1190/1.1443221.

Abstract:
Neural networks were used to estimate the offset, depth, and conductivity‐area product of a conductive target given an electromagnetic ellipticity image of the target. Five different neural network paradigms and five different representations of the ellipticity image were compared. The networks were trained with synthetic images of the target and tested on field data and more synthetic data. The extrapolation capabilities of the networks were also tested with synthetic data lying outside the spatial limits of the training set. The data representations consisted of the whole image, the subsampled image, the peak and adjacent troughs, the peak, and components from a two‐dimensional (2-D) fast Fourier transform. The paradigms tested were standard back propagation, directed random search, functional link, extended delta bar delta, and the hybrid combination of self‐organizing map and back propagation. For input patterns with fewer than 100 elements, the directed random search and functional link networks gave the best results. For patterns with more than 100 elements, the self‐organizing map/back propagation hybrid was most accurate. Using the whole ellipticity image gave the most accurate results for all the network paradigms. The fast Fourier transform data representation also yielded good results with a much faster computation time. Average accuracies of offset, depth, and conductivity‐area product as high as 97 percent could be achieved for test and field data and 88 percent for extrapolation data.
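The FFT-based data representation mentioned above can be sketched simply: keep only the lowest-frequency 2-D FFT coefficients of the image as a compact feature vector. The image size and number of retained coefficients are illustrative assumptions.

```python
import numpy as np

def fft_representation(image, k=4):
    """Sketch: compress a 2-D ellipticity image into the magnitudes of its
    k x k lowest-frequency 2-D FFT coefficients -- a compact alternative to
    feeding the whole image to a network."""
    spectrum = np.fft.fft2(image)
    shifted = np.fft.fftshift(spectrum)             # low frequencies at the center
    cy, cx = np.array(shifted.shape) // 2
    block = shifted[cy - k // 2: cy + (k + 1) // 2, cx - k // 2: cx + (k + 1) // 2]
    return np.abs(block).ravel()                    # k*k real-valued features

features = fft_representation(np.random.rand(32, 32), k=4)   # -> 16 features
```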
41

Bartsev, S. I., P. M. Baturina, and G. M. Markova. "Neural Network-Based Decoding Input Stimulus Data Based on Recurrent Neural Network Neural Activity Pattern." Doklady Biological Sciences 502, no. 1 (March 17, 2022): 1–5. http://dx.doi.org/10.1134/s001249662201001x.

Abstract:
The paper assesses whether information received by an artificial neural network can be recovered by inspecting its neural activity patterns. A simple recurrent neural network forms dynamic excitation patterns that store data on the input stimulus during an extended delayed match-to-sample test with varying pause duration between the received stimuli. Information stored in these patterns can be used by the neural network at any moment within the specified interval (three to six clock cycles), which makes it possible to detect an invariant representation of the received stimulus. To identify these representations, a neural network-based decoding method is proposed that achieves 100% recognition of the received stimuli. This method also allows identification of the minimal subset of neurons whose excitation pattern contains comprehensive information about the stimulus received by the neural network.
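The decoding idea can be illustrated with a minimal sketch: record delay-period activity patterns, fit a classifier to read out the stimulus identity, and rank neurons by decoder weight to approximate the search for a minimal informative subset. The synthetic activity and logistic-regression decoder here are stand-ins, not the authors' recurrent network or decoding network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 30
stimulus = rng.integers(0, 2, size=n_trials)                 # which stimulus was shown
# stand-in for recorded delay-period activity: stimulus-dependent patterns + noise
patterns = rng.normal(size=(2, n_neurons))
activity = patterns[stimulus] + 0.5 * rng.normal(size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, activity, stimulus, cv=5).mean()
print(f"stimulus decodable from activity with accuracy {acc:.2f}")

# ranking neurons by decoder weight magnitude approximates the search for a
# minimal informative subset described in the abstract
decoder.fit(activity, stimulus)
informative = np.argsort(-np.abs(decoder.coef_[0]))[:5]
```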
42

Liu, Xinlong, Chu He, Dehui Xiong, and Mingsheng Liao. "Pattern Statistics Network for Classification of High-Resolution SAR Images." Remote Sensing 11, no. 16 (August 20, 2019): 1942. http://dx.doi.org/10.3390/rs11161942.

Abstract:
The classification of synthetic aperture radar (SAR) images is of great importance for rapid scene understanding. Recently, convolutional neural networks (CNNs) have been applied to the classification of single-polarized SAR images. However, this task remains difficult due to the random and complex spatial patterns in SAR images, especially in the case of limited training data. In this paper, a pattern statistics network (PSNet) is proposed to address this problem. PSNet borrows ideas from statistics and probability theory and explicitly embeds the random nature of SAR images in the representation learning. In the PSNet, both fluctuation and pattern representations are extracted for SAR images. More specifically, the fluctuation representation does not consider the rigorous relationships between local pixels and only describes the average fluctuation of local pixels. By contrast, the pattern representation is devoted to hierarchically capturing the interactions between local pixels, namely, the spatial patterns of SAR images. The proposed PSNet is evaluated on three real SAR datasets, including spaceborne and airborne data. The experimental results indicate that the fluctuation representation is useful and that PSNet achieves superior performance in comparison with related CNN-based and texture-based methods.
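A fluctuation-style feature that ignores the exact arrangement of local pixels can be sketched as a sliding-window standard deviation, in contrast to pattern features that model pixel interactions. The window size is an illustrative assumption, and this is not the PSNet layer itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_fluctuation(sar_image, window=5):
    """Sketch: local standard deviation of pixel intensities -- a simple,
    order-free 'fluctuation' feature computed in a sliding window."""
    mean = uniform_filter(sar_image, size=window)
    mean_sq = uniform_filter(sar_image ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

fluct_map = local_fluctuation(np.random.rand(64, 64))   # same shape as the input image
```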
43

Ye, Zhonglin, Haixing Zhao, Ke Zhang, and Yu Zhu. "Multi-View Network Representation Learning Algorithm Research." Algorithms 12, no. 3 (March 12, 2019): 62. http://dx.doi.org/10.3390/a12030062.

Abstract:
Network representation learning is a key research field in network data mining. In this paper, we propose a novel multi-view network representation algorithm (MVNR), which embeds multi-scale relations of network vertices into a low-dimensional representation space. In contrast to existing approaches, MVNR explicitly encodes higher-order information using k-step networks. In addition, we introduce the matrix forest index as a network feature that can be used to balance the representation weights of different network views. We also analyze the relationships between MVNR and several existing approaches, including DeepWalk, node2vec, and GraRep. We conduct experiments on several real-world citation datasets and demonstrate that MVNR outperforms recent approaches based on neural matrix factorization. Specifically, we demonstrate the effectiveness of MVNR on network classification, visualization, and link prediction tasks.
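Two ingredients named in the abstract admit short worked sketches: k-step views as powers of the adjacency matrix, and the matrix forest index, which for a graph with Laplacian L is (I + L)^(-1). The toy graph below is illustrative, and this is not the MVNR code.

```python
import numpy as np

def k_step_views(adj, k_max=3):
    """k-step views: powers of the adjacency matrix capture multi-scale
    neighbourhood relations (A, A^2, ..., A^k)."""
    views, power = [], np.eye(adj.shape[0])
    for _ in range(k_max):
        power = power @ adj
        views.append(power.copy())
    return views

def matrix_forest_index(adj):
    """Matrix forest index: (I + L)^(-1) with L the graph Laplacian,
    a global node-similarity measure."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.inv(np.eye(adj.shape[0]) + laplacian)

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
views = k_step_views(A)            # three multi-scale relation matrices
mfi = matrix_forest_index(A)       # similarity matrix used to weight the views
```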
44

Sun, Hanlin, Wei Jie, Jonathan Loo, Liang Chen, Zhongmin Wang, Sugang Ma, Gang Li, and Shuai Zhang. "Network Representation Learning Enhanced by Partial Community Information That Is Found Using Game Theory." Information 12, no. 5 (April 25, 2021): 186. http://dx.doi.org/10.3390/info12050186.

Abstract:
Data collected from real systems and organized as information networks are now ubiquitous. Mining hidden information from such data generally helps to understand and improve the corresponding systems. The challenges of analyzing such data include high computational complexity and low parallelizability because of the complicated interconnected structure of their nodes. Network representation learning, also called network embedding, provides a practical and promising way to address these issues. One of the foremost requirements of network embedding is to preserve network topology properties in the learned low-dimensional representations. Community structure is a prominent characteristic of complex networks and should therefore be well maintained. However, the difficulty lies in the fact that the properties of community structure are multivariate and complicated; it is therefore insufficient to model community structure with a predefined model, as is common in most state-of-the-art network embedding algorithms that explicitly consider community structure preservation. In this paper, we introduce a multi-process parallel framework for network embedding that is enhanced by partial community information and preserves community properties well. We also implement the framework and propose two node embedding methods that use game theory to detect partial community information. A series of experiments is conducted to evaluate the performance of our methods and six state-of-the-art algorithms. The results demonstrate that our methods effectively preserve community properties of networks in their low-dimensional representations. Specifically, compared to the baselines, our algorithms rank first or second on networks with high overlapping diversity and density.
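A schematic way to see how partial community information can enhance an embedding is to append a community-membership indicator to each node's base representation. The community detector below uses label propagation purely as a stand-in for the game-theoretic detector described in the paper, and the base embedding is random for illustration.

```python
import networkx as nx
import numpy as np

def community_enhanced_features(G, base_embedding):
    """Sketch: augment node embeddings with (partial) community information,
    here a one-hot community indicator per node."""
    communities = list(nx.algorithms.community.label_propagation_communities(G))
    nodes = list(G.nodes())
    indicator = np.zeros((len(nodes), len(communities)))
    for c_idx, members in enumerate(communities):
        for v in members:
            indicator[nodes.index(v), c_idx] = 1.0
    return np.hstack([base_embedding, indicator])    # embedding + community signal

G = nx.karate_club_graph()
base = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 8))
enhanced = community_enhanced_features(G, base)
```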
45

Monterubbiano, Andrea, Raphael Azorin, Gabriele Castellano, Massimo Gallo, Salvatore Pontarelli, and Dario Rossi. "SPADA: A Sparse Approximate Data Structure Representation for Data Plane Per-flow Monitoring." Proceedings of the ACM on Networking 1, CoNEXT3 (November 27, 2023): 1–25. http://dx.doi.org/10.1145/3629149.

Abstract:
Accurate per-flow monitoring is critical for precise network diagnosis, performance analysis, and network operation and management in general. However, the limited amount of memory available on modern programmable devices and the large number of active flows force practitioners to monitor only the most relevant flows with approximate data structures, limiting their view of network traffic. We argue that, due to the skewed nature of network traffic, such data structures are, in practice, heavily underutilized, i.e. sparse, thus wasting a significant amount of memory. This paper proposes a Sparse Approximate Data Structure (SPADA) representation that leverages sparsity to reduce the memory footprint of per-flow monitoring systems in the data plane while preserving their original accuracy. SPADA representation can be integrated into a generic per-flow monitoring system and is suitable for several measurement use cases. We prototype SPADA in P4 for a commercial FPGA target and test our approach with a custom simulator that we make publicly available, on four real network traces over three different monitoring tasks. Our results show that SPADA achieves 2× to 11× memory footprint reduction with respect to the state-of-the-art while maintaining the same accuracy, or even improving it.
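The observation that skewed traffic leaves most slots of a dense per-flow structure empty can be illustrated with a toy sparse counter that only materializes the slots actually hit. This is not SPADA's P4 data structure; the class name, slot-space size, and packet keys are illustrative assumptions.

```python
from collections import Counter

class SparseFlowCounter:
    """Sketch: hash flow keys into a large virtual slot space but store only
    the slots that are actually used, exploiting traffic skew."""
    def __init__(self, virtual_slots=1 << 20):
        self.virtual_slots = virtual_slots
        self.slots = Counter()                  # only non-empty slots are materialized

    def update(self, flow_key, nbytes=1):
        self.slots[hash(flow_key) % self.virtual_slots] += nbytes

    def estimate(self, flow_key):
        return self.slots.get(hash(flow_key) % self.virtual_slots, 0)

    def occupancy(self):
        return len(self.slots) / self.virtual_slots   # typically far below 1

c = SparseFlowCounter()
for pkt in [("10.0.0.1", "10.0.0.2", 443)] * 100 + [("10.0.0.3", "10.0.0.4", 80)]:
    c.update(pkt)
print(c.estimate(("10.0.0.1", "10.0.0.2", 443)), f"{c.occupancy():.6%}")
```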
46

Kominakis, A. P. "Graph analysis of animals' pedigrees." Archives Animal Breeding 44, no. 5 (October 10, 2001): 521–30. http://dx.doi.org/10.5194/aab-44-521-2001.

Abstract:
In the present work, an attempt to apply graph analysis to visual representations of animals' pedigrees is presented. Analysis of pedigree networks of moderate size (tens or hundreds of points) can substantially contribute to revealing the relational structures between animals. Partitioning graphic representations of pedigree networks into smaller parts (blocks) by means of network decomposition methods resulted in better handling and understanding of horse genealogical data. Analysis of pedigree networks can be used to estimate the shortest kinship paths among animals, determine all predecessors and successors of selected animals, and estimate inbreeding coefficients of selected individuals. Detection of families and animals with major gene contributions can thereby be substantially facilitated. Graphic representation of pedigree networks provides a simultaneous, dynamic, and parsimonious representation of kinship, interrelations, and constituent structures.
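The listed operations translate directly into standard graph queries when a pedigree is stored as a directed graph with parent-to-offspring edges; a short sketch with hypothetical animal names follows (inbreeding coefficients are omitted).

```python
import networkx as nx

# pedigree as a directed graph: edge parent -> offspring (toy data)
pedigree = nx.DiGraph([
    ("sire_A", "foal_1"), ("dam_B", "foal_1"),
    ("sire_A", "foal_2"), ("dam_C", "foal_2"),
    ("foal_1", "foal_3"), ("dam_C", "foal_3"),
])

ancestors = nx.ancestors(pedigree, "foal_3")            # all predecessors of an animal
descendants = nx.descendants(pedigree, "sire_A")        # all successors of an animal
# shortest kinship path, ignoring edge direction
kinship = nx.shortest_path(pedigree.to_undirected(), "foal_2", "foal_3")
print(ancestors, descendants, kinship)
```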
47

Tu, Wenxuan, Sihang Zhou, Xinwang Liu, Xifeng Guo, Zhiping Cai, En Zhu, and Jieren Cheng. "Deep Fusion Clustering Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9978–87. http://dx.doi.org/10.1609/aaai.v35i11.17198.

Abstract:
Deep clustering is a fundamental yet challenging task for data analysis. Recently, there has been a strong tendency to combine autoencoders and graph neural networks to exploit structural information for better clustering performance. However, we observe that the existing literature 1) lacks a dynamic fusion mechanism to selectively integrate and refine the information of graph structure and node attributes for consensus representation learning; and 2) fails to extract information from both sides for robust target distribution (i.e., “ground-truth” soft labels) generation. To tackle these issues, we propose a Deep Fusion Clustering Network (DFCN). Specifically, in our network, an interdependency-learning-based Structure and Attribute Information Fusion (SAIF) module is proposed to explicitly merge the representations learned by an autoencoder and a graph autoencoder for consensus representation learning. Also, a reliable target distribution generation measure and a triplet self-supervision strategy, which facilitate cross-modality information exploitation, are designed for network training. Extensive experiments on six benchmark datasets demonstrate that the proposed DFCN consistently outperforms state-of-the-art deep clustering methods.
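The core fusion idea can be sketched as a learnable combination of an attribute (autoencoder) embedding and a structure (graph-autoencoder) embedding, followed by one step of graph propagation. The actual SAIF module is more elaborate; the class and tensor names here are illustrative.

```python
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    """Sketch: learnable convex combination of attribute (AE) and structure
    (GAE) embeddings, refined by one step of neighbourhood propagation."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, z_ae, z_gae, adj_norm):
        a = torch.sigmoid(self.alpha)
        fused = a * z_ae + (1 - a) * z_gae        # dynamic balance of the two sources
        return adj_norm @ fused                    # graph-smoothed consensus embedding

fusion = SimpleFusion()
z_ae, z_gae = torch.randn(5, 8), torch.randn(5, 8)
adj_norm = torch.eye(5)                            # stand-in for a normalized adjacency
consensus = fusion(z_ae, z_gae, adj_norm)
```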
48

Tian, Hao, and Reza Zafarani. "Higher-Order Networks Representation and Learning: A Survey." ACM SIGKDD Explorations Newsletter 26, no. 1 (July 24, 2024): 1–18. http://dx.doi.org/10.1145/3682112.3682114.

Abstract:
Network data has become widespread, larger, and more complex over the years. Traditional network data is dyadic, capturing the relations among pairs of entities. With the need to model interactions among more than two entities, significant research has focused on higher-order networks and ways to represent, analyze, and learn from them. There are two main directions to studying higher-order networks. One direction has focused on capturing higher-order patterns in traditional (dyadic) graphs by changing the basic unit of study from nodes to small frequently observed subgraphs, called motifs. As most existing network data comes in the form of pairwise dyadic relationships, studying higher-order structures within such graphs may uncover new insights. The second direction aims to directly model higher-order interactions using new and more complex representations such as simplicial complexes or hypergraphs. Some of these models have long been proposed, but improvements in computational power and the advent of new computational techniques have increased their popularity. Our goal in this paper is to provide a succinct yet comprehensive summary of the advanced higher-order network analysis techniques. We provide a systematic review of the foundations and algorithms, along with use cases and applications of higher-order networks in various scientific domains.
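The two directions contrasted in the survey can be illustrated side by side: counting a small motif (the triangle) in a dyadic graph, versus encoding a multi-way interaction directly as a hyperedge in a node-by-hyperedge incidence matrix. The toy graph and hyperedges below are illustrative.

```python
import numpy as np

# Direction 1 -- dyadic graph, higher-order patterns as motifs:
# count triangles from the adjacency matrix via trace(A^3)/6
A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 0]], dtype=float)
triangles = np.trace(np.linalg.matrix_power(A, 3)) / 6

# Direction 2 -- higher-order model: the 3-way interaction {0,1,2} as a single
# hyperedge, encoded in a node x hyperedge incidence matrix
hyperedges = [{0, 1, 2}, {1, 3}]
H = np.zeros((4, len(hyperedges)))
for j, e in enumerate(hyperedges):
    for v in e:
        H[v, j] = 1
print(triangles, H)
```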
49

Esser, Pascal, Maximilian Fleissner, and Debarghya Ghoshdastidar. "Non-parametric Representation Learning with Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 11910–18. http://dx.doi.org/10.1609/aaai.v38i11.29077.

Abstract:
Unsupervised and self-supervised representation learning has become popular in recent years for learning useful features from unlabelled data. Representation learning has been mostly developed in the neural network literature, and other models for representation learning are surprisingly unexplored. In this work, we introduce and analyze several kernel-based representation learning approaches: Firstly, we define two kernel Self-Supervised Learning (SSL) models using contrastive loss functions and secondly, a Kernel Autoencoder (AE) model based on the idea of embedding and reconstructing data. We argue that the classical representer theorems for supervised kernel machines are not always applicable for (self-supervised) representation learning, and present new representer theorems, which show that the representations learned by our kernel models can be expressed in terms of kernel matrices. We further derive generalisation error bounds for representation learning with kernel SSL and AE, and empirically evaluate the performance of these methods in both small data regimes as well as in comparison with neural network based models.
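The flavor of the representer-theorem result — that a learned representation of any point can be written through kernel evaluations against the training set — can be illustrated with a plain kernel-PCA embedding rather than the paper's SSL or autoencoder objectives; the RBF kernel, bandwidth, and two retained components are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                       # unlabelled training data
K = rbf_kernel(X, X)
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                # centering matrix
Kc = J @ K @ J
eigval, eigvec = np.linalg.eigh(Kc)                # ascending eigenvalues
alpha = eigvec[:, -2:] / np.sqrt(np.maximum(eigval[-2:], 1e-12))   # top-2 directions

def represent(x_new):
    """Representation of a new point expressed through kernel evaluations
    against the training set: z(x) = K(x, X) @ alpha.
    (Centering of the test kernel row is omitted for brevity.)"""
    return rbf_kernel(x_new[None, :], X) @ alpha

z = represent(rng.normal(size=3))                  # a 2-D learned representation
```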
50

Jing, Dongsheng, Yu Yang, Zhimin Gu, Renjun Feng, Yan Li, and Haitao Jiang. "Multi-Feature Fusion in Graph Convolutional Networks for Data Network Propagation Path Tracing." Electronics 13, no. 17 (August 28, 2024): 3412. http://dx.doi.org/10.3390/electronics13173412.

Abstract:
With the rapid development of information technology, the complexity of data networks is increasing, especially in electric power systems, where data security and privacy protection are of great importance. Throughout the entire distribution process of the supply chain, it is crucial to closely monitor the propagation paths and dynamics of electrical data to ensure security and quickly initiate comprehensive traceability investigations if any data tampering is detected. This research addresses the challenges of data network complexity and its impact on the security of power systems by proposing an innovative data network propagation path tracing model, which is constructed based on graph convolutional networks (GCNs) and the BERT model. Firstly, propagation trees are constructed based on the propagation structure, and the key attributes of data nodes are extracted and screened. Then, GCNs are utilized to learn the representation of node features with different attribute feature combinations in the propagation path graph, while the Bidirectional Encoder Representations from Transformers (BERT) model is employed to capture the deep semantic features of the original text content. The core of this research is to effectively integrate these two feature representations, namely the structural features obtained by GCNs and the semantic features obtained by the BERT model, in order to enhance the ability of the model to recognize the data propagation path. The experimental results demonstrate that this model performs well in power data propagation and tracing tasks, and the data recognition accuracy reaches 92.5%, which is significantly better than the existing schemes. This achievement not only improves the power system’s ability to cope with data security threats but also provides strong support for protecting data transmission security and privacy.
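The fusion step described above can be sketched as concatenating a structural node embedding (as a GCN would produce) with a semantic embedding (as a text encoder such as BERT would produce) and classifying the result. The encoders themselves are stubbed as precomputed tensors, and the dimensions and class name are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Sketch: fuse structural (graph) and semantic (text) node features
    for propagation-path node classification."""
    def __init__(self, struct_dim=64, sem_dim=768, n_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(struct_dim + sem_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, struct_emb, sem_emb):
        return self.fuse(torch.cat([struct_emb, sem_emb], dim=-1))

# stand-ins for GCN node embeddings and BERT [CLS] embeddings of node contents
struct_emb = torch.randn(10, 64)
sem_emb = torch.randn(10, 768)
logits = FusionClassifier()(struct_emb, sem_emb)   # -> (10, 2)
```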