Selected scientific literature on the topic "Network data representation"

Cite a source in APA, MLA, Chicago, Harvard and many other styles


Browse the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Network data representation".

Next to each source in the reference list there is an "Add to bibliography" button. Press it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Journal articles on the topic "Network data representation"

1

Tamilarasu, R., and G. Soundarya Devi. "Improvising Connection in 5G by Means of Particle Swarm Optimization Techniques." South Asian Journal of Engineering and Technology 14, no. 2 (2024): 1–6. http://dx.doi.org/10.26524/sajet.2023.14.2.

Full text
Abstract:
Data and network embedding techniques are essential for representing complex data structures in a lower-dimensional space, aiding in tasks like data inference and network reconstruction by assigning nodes to concise representations while preserving the network's structure. The integration of Particle Swarm Optimization (PSO) with matrix factorization methods optimizes mapping functions and parameters during the embedding process, enhancing representation learning efficiency. Combining PSO with techniques like Deep Walk highlights its adaptability as a robust optimization tool for extracting me
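As a rough illustration of the embedding idea summarized above (not the authors' algorithm), the sketch below uses a plain particle swarm to fit a low-dimensional factor matrix U so that U·Uᵀ approximates a toy adjacency matrix. The graph, swarm size, and PSO coefficients are arbitrary assumptions.

```python
import numpy as np

# Toy adjacency matrix of a 5-node graph (assumption: any small symmetric
# 0/1 matrix works for the illustration).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

n, d = A.shape[0], 2          # embed each node in d dimensions
swarm, iters = 30, 200        # PSO hyperparameters (arbitrary)
rng = np.random.default_rng(0)

def loss(U):
    """Matrix-factorization objective: how badly U @ U.T reconstructs A."""
    return np.linalg.norm(A - U @ U.T)

# Each particle is one candidate embedding matrix U (n x d), flattened.
pos = rng.normal(size=(swarm, n * d))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p.reshape(n, d)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p.reshape(n, d)) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

embedding = gbest.reshape(n, d)   # one low-dimensional vector per node
print("reconstruction error:", loss(embedding))
```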
2

Ye, Zhonglin, Haixing Zhao, Ke Zhang, Yu Zhu, and Zhaoyang Wang. "An Optimized Network Representation Learning Algorithm Using Multi-Relational Data." Mathematics 7, no. 5 (2019): 460. http://dx.doi.org/10.3390/math7050460.

Full text
Abstract:
Representation learning aims to encode the relationships of research objects into low-dimensional, compressible, and distributed representation vectors. The purpose of network representation learning is to learn the structural relationships between network vertices. Knowledge representation learning is oriented to model the entities and relationships in knowledge bases. In this paper, we first introduce the idea of knowledge representation learning into network representation learning, namely, we propose a new approach to model the vertex triplet relationships based on DeepWalk without TransE.
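The paper models vertex triplet relationships in the spirit of knowledge-graph embedding. The sketch below is a minimal TransE-style update in NumPy, shown only to make the triplet idea concrete: it omits negative sampling and the margin loss, and the triples, dimensions, and learning rate are made-up assumptions rather than the authors' setup.

```python
import numpy as np

# Hypothetical (head, relation, tail) triples; in the paper's setting these
# would be derived from walks over the network, not hand-coded.
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3)]
n_vertices, n_relations, dim = 4, 2, 8

rng = np.random.default_rng(1)
E = rng.normal(scale=0.1, size=(n_vertices, dim))   # vertex embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

def score(h, r, t):
    """TransE-style score: a valid triple should satisfy E[h] + R[r] ~ E[t]."""
    return np.linalg.norm(E[h] + R[r] - E[t])

lr = 0.05
for epoch in range(100):
    for h, r, t in triples:
        grad = E[h] + R[r] - E[t]          # gradient of 0.5 * ||h + r - t||^2
        E[h] -= lr * grad
        R[r] -= lr * grad
        E[t] += lr * grad

print([round(score(h, r, t), 3) for h, r, t in triples])
```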
3

Armenta, Marco, and Pierre-Marc Jodoin. "The Representation Theory of Neural Networks." Mathematics 9, no. 24 (2021): 3216. http://dx.doi.org/10.3390/math9243216.

Full text
Abstract:
In this work, we show that neural networks can be represented via the mathematical theory of quiver representations. More specifically, we prove that a neural network is a quiver representation with activation functions, a mathematical object that we represent using a network quiver. Furthermore, we show that network quivers gently adapt to common neural network concepts such as fully connected layers, convolution operations, residual connections, batch normalization, pooling operations and even randomly wired neural networks. We show that this mathematical representation is by no means an app
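To make the quiver viewpoint slightly more tangible, the toy sketch below stores a two-layer feedforward network as a set of weighted arrows between named vertices and evaluates it by propagating values along those arrows. The architecture, weights, and vertex names are invented for illustration and do not follow the paper's formal definitions.

```python
import math
from collections import defaultdict

# Arrows of a tiny "network quiver": (source vertex, target vertex, weight).
# Vertices i1, i2 are inputs, h1, h2 hidden, o1 the output (toy example).
arrows = [("i1", "h1", 0.5), ("i2", "h1", -0.3),
          ("i1", "h2", 0.8), ("i2", "h2", 0.1),
          ("h1", "o1", 1.2), ("h2", "o1", -0.7)]

activation = {"h1": math.tanh, "h2": math.tanh, "o1": lambda x: x}

def forward(inputs, layers):
    """Propagate values along the arrows, layer by layer."""
    values = dict(inputs)
    incoming = defaultdict(list)
    for s, t, w in arrows:
        incoming[t].append((s, w))
    for layer in layers:                      # vertices in topological order
        for v in layer:
            total = sum(values[s] * w for s, w in incoming[v])
            values[v] = activation[v](total)
    return values

out = forward({"i1": 1.0, "i2": -2.0}, layers=[["h1", "h2"], ["o1"]])
print(out["o1"])
```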
4

Aristizábal Q, Luz Angela, and Nicolás Toro G. "Multilayer Representation and Multiscale Analysis on Data Networks." International Journal of Computer Networks & Communications 13, no. 3 (2021): 41–55. http://dx.doi.org/10.5121/ijcnc.2021.13303.

Full text
Abstract:
The constant increase in the complexity of data networks motivates the search for strategies that make it possible to reduce current monitoring times. This paper shows the way in which multilayer network representation and the application of multiscale analysis techniques, as applied to software-defined networks, allows for the visualization of anomalies from "coarse views of the network topology". This implies the analysis of fewer data, and consequently the reduction of the time that a process takes to monitor the network. The fact that software-defined networks allow for the obtention of a
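The abstract mentions spotting anomalies from "coarse views of the network topology". As a loose sketch of coarsening only (not the paper's multiscale method), the code below folds one layer's adjacency matrix into a smaller super-node matrix using an assumed grouping of nodes.

```python
import numpy as np

# One layer of a toy multilayer network: 6 nodes, adjacency matrix (assumed).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Hypothetical grouping of nodes into super-nodes for a coarser view.
groups = [0, 0, 0, 1, 1, 1]          # node i belongs to super-node groups[i]
k = max(groups) + 1

# Membership matrix S: S[i, g] = 1 if node i is in super-node g.
S = np.zeros((A.shape[0], k))
S[np.arange(A.shape[0]), groups] = 1.0

coarse = S.T @ A @ S                 # edge counts between/within super-nodes
print(coarse)                        # coarse view: fewer entries to monitor
```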
5

Nguyễn, Tuấn, Nguyen Hai Hao, Dang Le Dinh Trang, Nguyen Van Tuan, and Cao Van Loi. "Robust anomaly detection methods for contamination network data." Journal of Military Science and Technology, no. 79 (May 19, 2022): 41–51. http://dx.doi.org/10.54939/1859-1043.j.mst.79.2022.41-51.

Full text
Abstract:
Recently, latent representation models, such as Shrink Autoencoder (SAE), have been demonstrated as robust feature representations for one-class learning-based network anomaly detection. In these studies, benchmark network datasets that are processed in laboratory environments to make them completely clean are often employed for constructing and evaluating such models. In real-world scenarios, however, we can not guarantee 100% to collect pure normal data for constructing latent representation models. Therefore, this work aims to investigate the characteristics of the latent representation of
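The following sketch illustrates the generic "score by distance in latent space" idea behind latent-representation anomaly detectors. It is not a Shrink Autoencoder: PCA stands in for the learned latent space, the data are synthetic, and the median center is just one simple way to tolerate some contamination in the training set.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "network records": mostly normal traffic features, plus a few
# contaminated rows mixed into the training set (all values are made up).
normal = rng.normal(0.0, 1.0, size=(500, 10))
contamination = rng.normal(6.0, 1.0, size=(10, 10))
train = np.vstack([normal, contamination])

# Stand-in latent representation: project onto the top-2 principal components.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)

def to_latent(X):
    """Project records into the placeholder 2-dimensional latent space."""
    return (X - mean) @ Vt[:2].T

center = np.median(to_latent(train), axis=0)   # median resists contamination

def anomaly_score(X):
    """Distance from the latent center: larger means more anomalous."""
    return np.linalg.norm(to_latent(X) - center, axis=1)

test_normal = rng.normal(0.0, 1.0, size=(5, 10))
test_attack = rng.normal(6.0, 1.0, size=(5, 10))
print(anomaly_score(test_normal).round(2), anomaly_score(test_attack).round(2))
```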
6

Du, Xin, Yulong Pei, Wouter Duivesteijn, and Mykola Pechenizkiy. "Fairness in Network Representation by Latent Structural Heterogeneity in Observational Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3809–16. http://dx.doi.org/10.1609/aaai.v34i04.5792.

Full text
Abstract:
While recent advances in machine learning put many focuses on fairness of algorithmic decision making, topics about fairness of representation, especially fairness of network representation, are still underexplored. Network representation learning learns a function mapping nodes to low-dimensional vectors. Structural properties, e.g. communities and roles, are preserved in the latent embedding space. In this paper, we argue that latent structural heterogeneity in the observational data could bias the classical network representation model. The unknown heterogeneous distribution across subgroup
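A crude way to see the kind of bias the paper is concerned with (not its method) is to fit one shared low-rank embedding and compare how well it reconstructs each structural subgroup. The toy graph, the subgroup split, and the SVD stand-in for a representation model are all assumptions.

```python
import numpy as np

# Toy graph: 8 nodes in two structural subgroups (a dense clique and a chain).
A = np.zeros((8, 8))
A[:4, :4] = 1 - np.eye(4)                       # subgroup 0: dense clique
for i, j in [(3, 4), (4, 5), (5, 6), (6, 7)]:   # subgroup 1: sparse chain
    A[i, j] = A[j, i] = 1
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Shared low-rank embedding via truncated SVD (a stand-in for an NRL model).
U, s, Vt = np.linalg.svd(A)
Z = U[:, :2] * s[:2]                 # 2-dimensional node embeddings
A_hat = Z @ Vt[:2]                   # reconstruction from the embedding

# Per-subgroup reconstruction error: a large gap hints that the shared model
# serves one structural subgroup better than the other.
for g in (0, 1):
    err = np.linalg.norm(A[groups == g] - A_hat[groups == g])
    print("subgroup", g, "reconstruction error:", round(err, 3))
```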
7

Chen, Dongming, Mingshuo Nie, Jiarui Yan, Jiangnan Meng, and Dongqi Wang. "Network Representation Learning Algorithm Based on Community Folding." 網際網路技術學刊 23, no. 2 (2022): 415–23. http://dx.doi.org/10.53106/160792642022032302020.

Full text
Abstract:
Network representation learning is a machine learning method that maps network topology and node information into low-dimensional vector space, which can reduce the temporal and spatial complexity of downstream network data mining such as node classification and graph clustering. This paper addresses the problem that neighborhood information-based network representation learning algorithm ignores the global topological information of the network. We propose the Network Representation Learning Algorithm Based on Community Folding (CF-NRL) considering the influence of community structur
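As an assumed reading of the folding step only (the full CF-NRL algorithm involves more than this), the sketch below collapses each detected community of a standard example graph into a super-node and keeps inter-community edge counts as weights; networkx and its greedy modularity communities are used purely for convenience.

```python
import networkx as nx
from networkx.algorithms import community

# Community folding step (a sketch, not the full CF-NRL algorithm): collapse
# each detected community of the karate-club graph into one super-node.
G = nx.karate_club_graph()
communities = community.greedy_modularity_communities(G)
node_to_comm = {v: i for i, c in enumerate(communities) for v in c}

folded = nx.Graph()
folded.add_nodes_from(range(len(communities)))
for u, v in G.edges():
    cu, cv = node_to_comm[u], node_to_comm[v]
    if cu != cv:
        w = folded.get_edge_data(cu, cv, {"weight": 0})["weight"]
        folded.add_edge(cu, cv, weight=w + 1)

# Representations could then be learned on both G and `folded`, so that the
# folded graph contributes global, community-level structure.
print(folded.number_of_nodes(), folded.size(weight="weight"))
```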
8

Zhang, Xiaoxian, Jianpei Zhang, and Jing Yang. "Large-scale dynamic social data representation for structure feature learning." Journal of Intelligent & Fuzzy Systems 39, no. 4 (2020): 5253–62. http://dx.doi.org/10.3233/jifs-189010.

Full text
Abstract:
The problems caused by network dimension disasters and computational complexity have become an important issue to be solved in the field of social network research. The existing methods for network feature learning are mostly based on static and small-scale assumptions, and there is no modified learning for the unique attributes of social networks. Therefore, existing learning methods cannot adapt to the dynamic, large-scale, and even super-large-scale character of current social networks. This paper mainly studies the feature representation learning of large-scale dynamic social netwo
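A minimal, assumed illustration of why dynamic settings differ from static ones: the sketch below consumes a stream of timestamped edges and refreshes simple per-node features incrementally, touching only the endpoints of each new edge instead of recomputing features for the whole network at every snapshot. It is not the representation model studied in the paper.

```python
from collections import defaultdict

# A stream of timestamped edges standing in for a dynamic social network
# (edges and features are made up for illustration).
edge_stream = [(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 4), (3, 0, 5)]

degree = defaultdict(int)          # simple structural feature per node
last_active = {}                   # temporal feature: last time a node moved

def update(u, v, t):
    """Incrementally refresh features for only the two touched nodes."""
    for node in (u, v):
        degree[node] += 1
        last_active[node] = t

for u, v, t in edge_stream:
    update(u, v, t)

print(dict(degree), last_active)
```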
9

Kapoor, Maya, Michael Napolitano, Jonathan Quance, Thomas Moyer, and Siddharth Krishnan. "Detecting VoIP Data Streams: Approaches Using Hidden Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 15519–27. http://dx.doi.org/10.1609/aaai.v37i13.26840.

Full text
Abstract:
The use of voice-over-IP technology has rapidly expanded over the past several years, and has thus become a significant portion of traffic in the real, complex network environment. Deep packet inspection and middlebox technologies need to analyze call flows in order to perform network management, load-balancing, content monitoring, forensic analysis, and intelligence gathering. Because the session setup and management data can be sent on different ports or out of sync with VoIP call data over the Real-time Transport Protocol (RTP) with low latency, inspection software may miss calls or parts o
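Before any learned classifier sees a flow, inspection software typically has to pick RTP candidates out of raw UDP payloads. The sketch below (not the paper's hidden-representation models) parses the fixed 12-byte RTP header defined in RFC 3550 and rejects payloads whose version field is not 2; the sample packet bytes are made up.

```python
import struct

def parse_rtp_header(payload: bytes):
    """Parse the fixed 12-byte RTP header (RFC 3550) from a UDP payload.
    Returns None if the payload cannot plausibly be RTP."""
    if len(payload) < 12:
        return None
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", payload[:12])
    version = b0 >> 6
    if version != 2:                      # current RTP traffic is version 2
        return None
    return {
        "version": version,
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,        # e.g. 0 = PCMU, 8 = PCMA
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Hypothetical 12-byte header: version 2, payload type 0 (PCMU), sequence 1.
sample = bytes([0x80, 0x00, 0x00, 0x01]) + (0).to_bytes(4, "big") + (0x1234).to_bytes(4, "big")
print(parse_rtp_header(sample))
```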
10

Giannarakis, Nick, Alexandra Silva, and David Walker. "ProbNV: probabilistic verification of network control planes." Proceedings of the ACM on Programming Languages 5, ICFP (2021): 1–30. http://dx.doi.org/10.1145/3473595.

Full text
Abstract:
ProbNV is a new framework for probabilistic network control plane verification that strikes a balance between generality and scalability. ProbNV is general enough to encode a wide range of features from the most common protocols (eBGP and OSPF) and yet scalable enough to handle challenging properties, such as probabilistic all-failures analysis of medium-sized networks with 100-200 devices. When there are a small, bounded number of failures, networks with up to 500 devices may be verified in seconds. ProbNV operates by translating raw CISCO configurations into a probabilistic and functional pr
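ProbNV's own encoding is not reproduced here. As a toy stand-in for the idea of probabilistic failure analysis, the sketch below enumerates every combination of independent link failures in a four-router topology and sums the probability that two routers remain connected; the topology and probabilities are invented, and real tools rely on symbolic techniques rather than exhaustive enumeration.

```python
from itertools import product

# Toy topology: links between routers with independent failure probabilities
# (all names and probabilities are made up; this is not ProbNV's encoding).
links = {("r1", "r2"): 0.01, ("r2", "r3"): 0.02,
         ("r1", "r3"): 0.05, ("r3", "r4"): 0.01}

def connected(up_links, src, dst):
    """Simple reachability check over the links that are currently up."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for a, b in up_links:
            nxt = b if a == node else a if b == node else None
            if nxt and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Exhaustive "all failures" analysis: sum probability over every subset of
# failed links (feasible only for tiny examples).
prob_reachable = 0.0
for states in product([False, True], repeat=len(links)):   # True = failed
    p = 1.0
    up = []
    for (link, fail_p), failed in zip(links.items(), states):
        p *= fail_p if failed else (1 - fail_p)
        if not failed:
            up.append(link)
    if connected(up, "r1", "r4"):
        prob_reachable += p

print(round(prob_reachable, 6))
```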