Journal articles on the topic 'Neural Network Embeddings'

Consult the top 50 journal articles for your research on the topic 'Neural Network Embeddings.'

1

Che, Feihu, Dawei Zhang, Jianhua Tao, Mingyue Niu, and Bocheng Zhao. "ParamE: Regarding Neural Network Parameters as Relation Embeddings for Knowledge Graph Completion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2774–81. http://dx.doi.org/10.1609/aaai.v34i03.5665.

Abstract:
We study the task of learning entity and relation embeddings in knowledge graphs for predicting missing links. Previous translational models on link prediction make use of translational properties but lack enough expressiveness, while the convolution neural network based model (ConvE) takes advantage of the great nonlinearity fitting ability of neural networks but overlooks translational properties. In this paper, we propose a new knowledge graph embedding model called ParamE which can utilize the two advantages together. In ParamE, head entity embeddings, relation embeddings and tail entity embeddings are regarded as the input, parameters and output of a neural network respectively. Since parameters in networks are effective in converting input to output, taking neural network parameters as relation embeddings makes ParamE much more expressive and translational. In addition, the entity and relation embeddings in ParamE are from feature space and parameter space respectively, which is in line with the essence that entities and relations are supposed to be mapped into two different spaces. We evaluate the performances of ParamE on standard FB15k-237 and WN18RR datasets, and experiments show ParamE can significantly outperform existing state-of-the-art models, such as ConvE, SACN, RotatE and D4-STE/Gumbel.
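The core idea can be reduced to a minimal numpy sketch (the dimensions, the tanh nonlinearity, and the dot-product scoring below are illustrative assumptions, not the paper's exact configuration): the relation embedding is reshaped into the weights of a small network that maps the head entity embedding to a predicted tail embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # entity embedding dimension (illustrative)

# Entity embeddings live in feature space.
head = rng.normal(size=d)
tail = rng.normal(size=d)

# The relation embedding lives in parameter space: it is reshaped into
# the weights and bias of a one-layer network with a tanh nonlinearity.
relation = rng.normal(size=d * d + d)
W = relation[: d * d].reshape(d, d)
b = relation[d * d :]

# Applying the relation-parameterized network to the head entity
# yields a predicted tail embedding.
predicted_tail = np.tanh(W @ head + b)

# Score a candidate tail by similarity to the prediction
# (higher = more plausible triple).
score = float(predicted_tail @ tail)
print(predicted_tail.shape, score)
```

Because the relation acts as the function converting input to output rather than as another point in feature space, entities and relations naturally end up in two different spaces, as the abstract emphasizes.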
2

Huang, Junjie, Huawei Shen, Liang Hou, and Xueqi Cheng. "SDGNN: Learning Node Representation for Signed Directed Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 196–203. http://dx.doi.org/10.1609/aaai.v35i1.16093.

Abstract:
Network embedding is aimed at mapping nodes in a network into low-dimensional vector representations. Graph Neural Networks (GNNs) have received widespread attention and achieve state-of-the-art performance in learning node representations. However, most GNNs only work in unsigned networks, where only positive links exist. It is not trivial to transfer these models to signed directed networks, which are widely observed in the real world yet less studied. In this paper, we first review two fundamental sociological theories (i.e., status theory and balance theory) and conduct empirical studies on real-world datasets to analyze the social mechanism in signed directed networks. Guided by related sociological theories, we propose a novel Signed Directed Graph Neural Networks model named SDGNN to learn node embeddings for signed directed networks. The proposed model simultaneously reconstructs link signs, link directions, and signed directed triangles. We validate our model’s effectiveness on five real-world datasets, which are commonly used as the benchmark for signed network embeddings. Experiments demonstrate the proposed model outperforms existing models, including feature-based methods, network embedding methods, and several GNN methods.
3

Srinidhi, K., T. L.S Tejaswi, CH Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Abstract:
We propose an advanced, well-trained sentiment analysis approach based on word-specific embeddings, dubbed sentiment embeddings. Existing word- and phrase-embedding learning algorithms mainly use the contexts of terms but ignore the sentiment of texts, so words conveying different sentiments in similar contexts are mapped to nearby word vectors. We bridge this problem by encoding sentiment information into the word embeddings together with their contexts. To perform sentiment analysis on e-commerce and social networking data, we developed neural network-based algorithms with tailored loss functions that capture sentiment. We apply the embeddings to word-level and sentence-level sentiment analysis and classification, and to constructing sentiment-oriented lexicons. Experimental analysis shows that sentiment embeddings outperform context-based embeddings on many distributed datasets. This work also provides insight into neural network techniques for learning word embeddings in other NLP tasks.
4

Armandpour, Mohammadreza, Patrick Ding, Jianhua Huang, and Xia Hu. "Robust Negative Sampling for Network Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3191–98. http://dx.doi.org/10.1609/aaai.v33i01.33013191.

Abstract:
Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable candidate for the network embedding problem as a distinct objective. We show NS can learn undesirable embeddings, as the result of the “Popular Neighbor Problem.” We use the theory to develop a new method “R-NS” that alleviates the problems of NS by using a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate the superiority of R-NS over NS for multi-label classification on a variety of real-world networks including social networks and language networks.
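The objective under analysis can be sketched as the standard skip-gram negative-sampling loss for one (node, neighbour) pair (this is the generic NS loss the paper critiques, not the proposed R-NS method; the toy 2-d vectors are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ns_loss(u, v_pos, negatives):
    """Negative-sampling loss for one (node, neighbour) pair:
    -log sigma(u . v+) - sum_k log sigma(-u . v-_k)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    loss = -math.log(sigmoid(dot(u, v_pos)))
    for v_neg in negatives:
        loss -= math.log(sigmoid(-dot(u, v_neg)))
    return loss

# Toy 2-d embeddings: a roughly aligned true neighbour and two
# sampled non-neighbours.
u = [1.0, 0.5]
v_pos = [0.9, 0.4]
negs = [[-0.8, 0.1], [0.2, -0.7]]
print(ns_loss(u, v_pos, negs))
```

The loss drops as the node and its true neighbour align and rises as sampled negatives align, which is exactly where the "Popular Neighbor Problem" bites: popular nodes are frequently drawn as negatives even when they are true neighbours.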
5

Kamath, S., K. G. Karibasappa, Anvitha Reddy, Arati M. Kallur, B. B. Priyanka, and B. P. Bhagya. "Improving the Relation Classification Using Convolutional Neural Network." IOP Conference Series: Materials Science and Engineering 1187, no. 1 (September 1, 2021): 012004. http://dx.doi.org/10.1088/1757-899x/1187/1/012004.

Abstract:
Relation extraction has been an emerging research topic in the field of Natural Language Processing. The proposed work classifies the relations among the data considering the semantic relevance of words, using word2vec embeddings to train the convolutional neural network. We intended to use the semantic relevance of the words in the document to enrich the learning of the embeddings for improved classification. We designed a framework to automatically extract the relations between the entities using deep learning techniques. The framework includes pre-processing, extracting the feature vectors using word2vec embedding, and classification using convolutional neural networks. We perform extensive experimentation using benchmark datasets and show improved classification accuracy in comparison with the state-of-the-art methodologies, while also including additional relations.
6

Gu, Haishuo, Jinguang Sui, and Peng Chen. "Graph Representation Learning for Street-Level Crime Prediction." ISPRS International Journal of Geo-Information 13, no. 7 (July 1, 2024): 229. http://dx.doi.org/10.3390/ijgi13070229.

Abstract:
In contemporary research, the street network emerges as a prominent and recurring theme in crime prediction studies. Meanwhile, graph representation learning shows considerable success, which motivates us to apply the methodology to crime prediction research. In this article, a graph representation learning approach is utilized to derive topological structure embeddings within the street network. Subsequently, a heterogeneous information network that incorporates both the street network and urban facilities is constructed, and embeddings through link prediction tasks are obtained. Finally, the two types of high-order embeddings, along with other spatio-temporal features, are fed into a deep neural network for street-level crime prediction. The proposed framework is tested using data from Beijing, and the outcomes demonstrate that both types of embeddings have a positive impact on crime prediction, with the second embedding showing a more significant contribution. Comparative experiments indicate that the proposed deep neural network offers superior efficiency in crime prediction.
7

Zhang, Lei, Feng Qian, Jie Chen, and Shu Zhao. "An Unsupervised Rapid Network Alignment Framework via Network Coarsening." Mathematics 11, no. 3 (January 21, 2023): 573. http://dx.doi.org/10.3390/math11030573.

Abstract:
Network alignment aims to identify the correspondence of nodes between two or more networks. It is the cornerstone of many network mining tasks, such as cross-platform recommendation and cross-network data aggregation. Recently, with the development of network representation learning techniques, researchers have proposed many embedding-based network alignment methods, which perform better than traditional methods. However, several issues and challenges remain for network alignment tasks, such as the lack of labeled data, mapping across network embedding spaces, and computational efficiency. Based on the graph neural network (GNN), we propose the URNA (unsupervised rapid network alignment) framework to achieve an effective balance between accuracy and efficiency. The framework has two phases: model training and network alignment. We first compress the original networks into small networks, then exploit the coarse networks to accelerate the training of the GNN. We also use parameter sharing to guarantee the consistency of embedding spaces and an unsupervised loss function to update the parameters. In the network alignment phase, we first use a one-pass forward propagation to learn node embeddings of the original networks, and then use multi-order embeddings from the outputs of all convolutional layers to calculate the similarity of nodes between the two networks via the vector inner product for alignment. Experimental results on real-world datasets show that the proposed method can significantly reduce running time and memory requirements while guaranteeing alignment performance.
8

Truică, Ciprian-Octavian, Elena-Simona Apostol, Maria-Luiza Șerban, and Adrian Paschke. "Topic-Based Document-Level Sentiment Analysis Using Contextual Cues." Mathematics 9, no. 21 (October 27, 2021): 2722. http://dx.doi.org/10.3390/math9212722.

Abstract:
Document-level Sentiment Analysis is a complex task that implies the analysis of large textual content that can incorporate multiple contradictory polarities at the phrase and word levels. Most of the current approaches either represent textual data using pre-trained word embeddings without considering the local context that can be extracted from the dataset, or they detect the overall topic polarity without considering both the local and global context. In this paper, we propose a novel document-topic embedding model, DocTopic2Vec, for document-level polarity detection in large texts by employing general and specific contextual cues obtained through the use of document embeddings (Doc2Vec) and Topic Modeling. In our approach, (1) we use a large dataset with game reviews to create different word embeddings by applying Word2Vec, FastText, and GloVe, (2) we create Doc2Vecs enriched with the local context given by the word embeddings for each review, (3) we construct topic embeddings Topic2Vec using three Topic Modeling algorithms, i.e., LDA, NMF, and LSI, to enhance the global context of the Sentiment Analysis task, (4) for each document and its dominant topic, we build the new DocTopic2Vec by concatenating the Doc2Vec with the Topic2Vec created with the same word embedding. We also design six new Convolutional-based (Bidirectional) Recurrent Deep Neural Network Architectures that show promising results for this task. The proposed DocTopic2Vecs are used to benchmark multiple Machine and Deep Learning models, i.e., a Logistic Regression model, used as a baseline, and 18 Deep Neural Networks Architectures. The experimental results show that the new embedding and the new Deep Neural Network Architectures achieve better results than the baseline, i.e., Logistic Regression and Doc2Vec.
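Step (4), the construction of DocTopic2Vec, reduces to a concatenation; a minimal sketch with illustrative dimensions and random stand-ins for the trained Doc2Vec and Topic2Vec vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions, not the paper's settings.
doc_dim, topic_dim = 8, 3

# Doc2Vec embedding of one review (local context) and the Topic2Vec
# embedding of its dominant topic (global context); random stand-ins
# for the vectors a trained model would produce.
doc2vec = rng.normal(size=doc_dim)
topic2vec = rng.normal(size=topic_dim)

# DocTopic2Vec: concatenate the document embedding with the embedding
# of its dominant topic, built from the same word embedding.
doctopic2vec = np.concatenate([doc2vec, topic2vec])
print(doctopic2vec.shape)
```

The concatenated vector is what the paper feeds to the Logistic Regression baseline and the 18 deep architectures for polarity detection.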
9

Jang, Youngjin, and Harksoo Kim. "Reliable Classification of FAQs with Spelling Errors Using an Encoder-Decoder Neural Network in Korean." Applied Sciences 9, no. 22 (November 7, 2019): 4758. http://dx.doi.org/10.3390/app9224758.

Abstract:
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network. The proposed model uses three types of word embeddings: fixed word embeddings for representing domain-independent meanings of words, fine-tuned word embeddings for representing domain-specific meanings of words, and character-level word embeddings for bridging lexical gaps caused by spelling errors. It also uses class embeddings to represent domain knowledge associated with each category. In the experiments with an FAQ dataset about online banking, the proposed embedding methods contributed to an improved performance of the sentence classification. In addition, the proposed model showed better performance (with an accuracy of 0.810 in the classification of 411 categories) than that of the comparison model.
10

Guo, Lei, Haoran Jiang, Xiyu Liu, and Changming Xing. "Network Embedding-Aware Point-of-Interest Recommendation in Location-Based Social Networks." Complexity 2019 (November 4, 2019): 1–18. http://dx.doi.org/10.1155/2019/3574194.

Abstract:
As one of the important techniques for exploring unknown places for users, methods for point-of-interest (POI) recommendation have been widely studied in recent years. Compared with traditional recommendation problems, POI recommendation suffers from more challenges, such as the cold-start and one-class collaborative filtering problems. Many existing studies have focused on how to overcome these challenges by exploiting different types of contexts (e.g., social and geographical information). However, most of these methods only model these contexts as regularization terms, and the deep information hidden in the network structure has not been fully exploited. On the other hand, neural network-based embedding methods have shown their power in many recommendation tasks through their ability to extract high-level representations from raw data. According to the above observations, to well utilize the network information, a neural network-based embedding method (node2vec) is first exploited to learn the user and POI representations from a social network and a predefined location network, respectively. To deal with the implicit feedback, a pair-wise ranking-based method is then introduced. Finally, by regarding the pretrained network representations as the priors of the latent feature factors, an embedding-based POI recommendation method is proposed. As this method consists of an embedding model and a collaborative filtering model, when the training data are absent, the predictions will mainly be generated by the extracted embeddings. In other cases, this method will learn the user and POI factors from these two components. Experiments on two real-world datasets demonstrate the importance of the network embeddings and the effectiveness of our proposed method.
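The pair-wise ranking step for implicit feedback can be sketched with a BPR-style loss (the exact loss form and the toy vectors below are assumptions; in the paper the pretrained node2vec representations serve as priors for these latent factors):

```python
import math

def bpr_loss(user, poi_visited, poi_unvisited):
    """Pairwise ranking loss: a visited POI should score higher than an
    unvisited one, -log sigma(x_uv - x_uj)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diff = dot(user, poi_visited) - dot(user, poi_unvisited)
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Toy 3-d latent factors; in practice these would be initialized from
# the node2vec embeddings of the social and location networks.
user = [0.6, -0.2, 0.1]
visited = [0.5, -0.1, 0.0]
unvisited = [-0.3, 0.4, 0.2]

print(bpr_loss(user, visited, unvisited))
```

Minimizing this loss pushes visited POIs above unvisited ones in the ranking; when the two candidates score equally, the loss is exactly log 2.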
11

Nguyen, Van Quan, Tien Nguyen Anh, and Hyung-Jeong Yang. "Real-time event detection using recurrent neural network in social sensors." International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985649. http://dx.doi.org/10.1177/1550147719856492.

Abstract:
We propose an approach for temporal event detection using deep learning and multi-embedding on a set of text data from social media. First, a convolutional neural network augmented with multiple word-embedding architectures is used as a text classifier for the pre-processing of the input textual data. Second, an event detection model using a recurrent neural network is employed to learn time series data features by extracting temporal information. Recently, convolutional neural networks have been used in natural language processing problems and have obtained excellent results when operating on available embedding vectors. In this article, word-embedding features at the embedding layer are combined and fed to a convolutional neural network. The proposed method has no size limitation, supports more embeddings than standard multichannel-based approaches, and obtains similar performance (accuracy score) on some benchmark data sets, especially on imbalanced data sets. For event detection, a long short-term memory network is used as a predictor that learns higher-level temporal features so as to predict future values. An error distribution estimation model is built to calculate the anomaly score of each observation. Events are detected using a window-based method on the anomaly scores.
12

Jadon, Anil Kumar, and Suresh Kumar. "Enhancing emotion detection with synergistic combination of word embeddings and convolutional neural networks." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 3 (September 1, 2024): 1933. http://dx.doi.org/10.11591/ijeecs.v35.i3.pp1933-1941.

Abstract:
Recognizing emotions in textual data is crucial in a wide range of natural language processing (NLP) applications, from consumer sentiment research to mental health evaluation. The word embedding techniques play a pivotal role in text processing. In this paper, the performance of several well-known word embedding methods is evaluated in the context of emotion recognition. The classification of emotions is further enhanced using a convolutional neural network (CNN) model because of its propensity to capture local patterns and its recent triumphs in text-related tasks. The integration of CNN with word embedding techniques introduced an additional layer to the landscape of emotion detection from text. The synergy between word embedding techniques and CNN harnesses the strengths of both approaches. CNNs extract local patterns and features from sequential data, making them well-suited for capturing relevant information within the embeddings. The results obtained with various embeddings highlight the significance of choosing synergistic combinations for optimum performance. The combination of CNNs and word embeddings proved a versatile and effective approach.
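The local-pattern extraction that makes CNNs fit embedded text can be sketched as a valid 1-D convolution over a sequence of word embeddings followed by ReLU and max-over-time pooling (toy sizes and random weights; a real model would learn the filters):

```python
import numpy as np

rng = np.random.default_rng(0)

# A sentence as a sequence of word embeddings (6 words, dim 5).
seq_len, emb_dim, n_filters, width = 6, 5, 4, 3
sentence = rng.normal(size=(seq_len, emb_dim))

# Convolution filters spanning `width` consecutive words: each filter
# detects a local pattern across the embedding dimensions.
filters = rng.normal(size=(n_filters, width, emb_dim))

# Valid 1-D convolution over the word axis, then ReLU and
# max-over-time pooling: one fixed-size feature vector per sentence,
# regardless of sentence length.
n_windows = seq_len - width + 1
conv = np.array([
    [np.sum(sentence[t : t + width] * f) for t in range(n_windows)]
    for f in filters
])
features = np.maximum(conv, 0.0).max(axis=1)
print(features.shape)
```

The pooled feature vector is what a downstream dense layer would classify into emotion labels.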
13

Altuntas, Volkan. "NodeVector: A Novel Network Node Vectorization with Graph Analysis and Deep Learning." Applied Sciences 14, no. 2 (January 16, 2024): 775. http://dx.doi.org/10.3390/app14020775.

Abstract:
Network node embedding captures structural and relational information of nodes in the network and allows us to use machine learning algorithms for various prediction tasks on network data that have an inherently complex and disordered structure. Network node embedding should preserve as much information as possible about important network properties where information is stored, such as network structure and node properties, while representing nodes as numerical vectors in a lower-dimensional space than the original higher-dimensional space. Superior node embedding algorithms are a powerful tool for machine learning with effective and efficient node representation. Recent research in representation learning has led to significant advances in automating features through unsupervised learning, inspired by advances in natural language processing. Here, we seek to improve the representation quality of node embeddings with a new node vectorization technique that uses network analysis to overcome network-based information loss. In this study, we introduce the NodeVector algorithm, which combines network analysis and neural networks to transfer information from the target network to the node embedding. As a proof of concept, our experiments performed on different categories of network datasets showed that our method achieves better results than its competitors for target networks. This is the first study to produce node representations by unsupervised learning using the combination of network analysis and neural networks while considering the network data structure. Based on experimental results, the use of network analysis, complex initial node representation, balanced negative sampling, and neural networks has a positive effect on the representation quality of network node embedding.
14

Jbene, Mourad, Smail Tigani, Saadane Rachid, and Abdellah Chehri. "Deep Neural Network and Boosting Based Hybrid Quality Ranking for e-Commerce Product Search." Big Data and Cognitive Computing 5, no. 3 (August 13, 2021): 35. http://dx.doi.org/10.3390/bdcc5030035.

Abstract:
In the age of information overload, customers are overwhelmed with the number of products available for sale. Search engines try to overcome this issue by filtering relevant items to the users’ queries. Traditional search engines rely on the exact match of terms in the query and product meta-data. Recently, deep learning-based approaches grabbed more attention by outperforming traditional methods in many circumstances. In this work, we involve the power of embeddings to solve the challenging task of optimizing product search engines in e-commerce. This work proposes an e-commerce product search engine based on a similarity metric that works on top of query and product embeddings. Two pre-trained word embedding models were tested, the first representing a category of models that generate fixed embeddings and a second representing a newer category of models that generate context-aware embeddings. Furthermore, a re-ranking step was performed by incorporating a list of quality indicators that reflects the utility of the product to the customer as inputs to well-known ranking methods. To prove the reliability of the approach, the Amazon reviews dataset was used for experimentation. The results demonstrated the effectiveness of context-aware embeddings in retrieving relevant products and the quality indicators in ranking high-quality products.
15

Popov, Alexander. "Neural Network Models for Word Sense Disambiguation: An Overview." Cybernetics and Information Technologies 18, no. 1 (March 1, 2018): 139–51. http://dx.doi.org/10.2478/cait-2018-0012.

Abstract:
The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys the advances in neural language models in recent years that have resulted in methods for the effective distributed representation of linguistic units. Such representations – word embeddings, context embeddings, sense embeddings – can be effectively applied for WSD purposes, as they encode rich semantic information, especially in conjunction with recurrent neural networks, which are able to capture long-distance relations encoded in word order, syntax, and information structuring.
16

Hu, Ganglin, and Jun Pang. "Relation-Aware Weighted Embedding for Heterogeneous Graphs." Information Technology and Control 52, no. 1 (March 28, 2023): 199–214. http://dx.doi.org/10.5755/j01.itc.52.1.32390.

Abstract:
Heterogeneous graph embedding, aiming to learn the low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods conducted on heterogeneous graphs treat the heterogeneous neighbours equally. Although it is possible to get node weights through attention mechanisms, these are mainly built on expensive recursive message-passing and are therefore difficult to apply to large-scale networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to resolve this issue. R-WHGE comprehensively considers structural information, semantic information, meta-paths of nodes and meta-path-based node weights to learn effective node embeddings. More specifically, we first extract the feature importance of each node and then take the nodes’ importance as node weights. A weighted random walks-based embedding learning model is proposed to generate the initial weighted node embeddings according to each meta-path. Finally, we feed these embeddings to a relation-aware heterogeneous graph neural network to generate compact embeddings of nodes, which captures relation-aware characteristics. Extensive experiments on real-world datasets demonstrate that our model is competitive against various state-of-the-art methods.
17

Bui-Thi, Danh, Emmanuel Rivière, Pieter Meysman, and Kris Laukens. "Predicting compound-protein interaction using hierarchical graph convolutional networks." PLOS ONE 17, no. 7 (July 21, 2022): e0258628. http://dx.doi.org/10.1371/journal.pone.0258628.

Abstract:
Motivation: Convolutional neural networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. They have also drawn much attention from other domains, including drug discovery and drug development. In this study, we develop a computational method based on convolutional neural networks to tackle a fundamental question in drug discovery and development, i.e. the prediction of compound-protein interactions based on compound structure and protein sequence. We propose a hierarchical graph convolutional network (HGCN) to encode small molecules. The HGCN aggregates a molecule embedding from substructure embeddings, which are synthesized from atom embeddings. As small molecules usually share substructures, computing a molecule embedding from those common substructures allows us to learn better generic models. We then combined the HGCN with a one-dimensional convolutional network to construct a complete model for predicting compound-protein interactions. Furthermore, we apply an explanation technique, Grad-CAM, to visualize the contribution of each amino acid to the prediction. Results: Experiments using different datasets show the improvement of our model compared to other GCN-based methods and a sequence-based method, DeepDTA, in predicting compound-protein interactions. Each prediction made by the model is also explainable and can be used to identify critical residues mediating the interaction.
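The hierarchical aggregation described here, molecule from substructures from atoms, can be sketched with mean pooling as a stand-in for the HGCN's learned aggregation (the atom grouping and the pooling choice are illustrative assumptions):

```python
import numpy as np

# Toy atom embeddings of one molecule (5 atoms, dim 3).
atoms = np.arange(15, dtype=float).reshape(5, 3)

# Substructures as groups of atom indices (e.g. rings or functional
# groups identified on the molecular graph); grouping is illustrative.
substructures = [[0, 1, 2], [2, 3, 4]]

# Each substructure embedding is synthesized from its atom embeddings,
# and the molecule embedding is pooled from the substructure embeddings.
sub_embs = np.array([atoms[idx].mean(axis=0) for idx in substructures])
molecule = sub_embs.mean(axis=0)
print(molecule)  # [6. 7. 8.]
```

Because shared substructures produce shared intermediate embeddings across molecules, the pooled representation generalizes better than one built directly from atoms.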
18

Wang, Bin, Yu Chen, Jinfang Sheng, and Zhengkun He. "Attributed Graph Embedding Based on Attention with Cluster." Mathematics 10, no. 23 (December 1, 2022): 4563. http://dx.doi.org/10.3390/math10234563.

Abstract:
Graph embedding is of great significance for the research and analysis of graphs. Graph embedding aims to map nodes in the network to low-dimensional vectors while preserving information in the original graph of nodes. In recent years, the appearance of graph neural networks has significantly improved the accuracy of graph embedding. However, the influence of clusters was not considered in existing graph neural network (GNN)-based methods, so this paper proposes a new method to incorporate the influence of clusters into the generation of graph embedding. We use the attention mechanism to pass the message of the cluster pooled result and integrate the whole process into the graph autoencoder as the third layer of the encoder. The experimental results show that our model has made great improvement over the baseline methods in the node clustering and link prediction tasks, demonstrating that the embeddings generated by our model have excellent expressiveness.
19

Eyharabide, Victoria, Imad Eddine Ibrahim Bekkouch, and Nicolae Dragoș Constantin. "Knowledge Graph Embedding-Based Domain Adaptation for Musical Instrument Recognition." Computers 10, no. 8 (August 3, 2021): 94. http://dx.doi.org/10.3390/computers10080094.

Abstract:
Convolutional neural networks raised the bar for machine learning and artificial intelligence applications, mainly due to the abundance of data and computations. However, there is not always enough data for training, especially when it comes to historical collections of cultural heritage where the original artworks have been destroyed or damaged over time. Transfer learning and domain adaptation techniques are possible solutions to tackle the issue of data scarcity. This article presents a new method for domain adaptation based on knowledge graph embeddings. Knowledge graph embedding forms a projection of a knowledge graph into a lower-dimensional space where entities and relations are represented as continuous vectors. Our method incorporates these semantic vector spaces as a key ingredient to guide the domain adaptation process. We combined knowledge graph embeddings with visual embeddings from the images and trained a neural network with the combined embeddings as anchors using an extension of Fisher’s linear discriminant. We evaluated our approach on two cultural heritage datasets of images containing medieval and renaissance musical instruments. The experimental results showed a significant improvement over the baselines and state-of-the-art performance compared with other domain adaptation methods.
20

Boldakov, V. "Emotional Speech Synthesis with Emotion Embeddings." Herald of the Siberian State University of Telecommunications and Informatics, no. 4 (December 18, 2021): 23–31. http://dx.doi.org/10.55648/1998-6920-2021-15-4-23-31.

Abstract:
Several neural network architectures provide high-quality speech synthesis. In this article, emotional speech synthesis with global style tokens is researched. A novel method of emotional speech synthesis with emotional text embeddings is described.
21

Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work, we study the application of multimodal analogical reasoning to image retrieval. Multimodal analogy questions are given in a form of tuples of words and images, e.g., “cat”:“dog”::[an image of a cat sitting on a bench]:?, to search for an image of a dog sitting on a bench. Retrieving desired images given these tuples can be seen as a task of finding images whose relation between the query image is close to that of query words. One way to achieve the task is building a common vector space that exhibits analogical regularities. To learn such an embedding, we propose a quadruple neural network called multimodal siamese network. The network consists of recurrent neural networks and convolutional neural networks based on the siamese architecture. We also introduce an effective procedure to generate analogy examples from an image-caption dataset for training of our network. In our experiments, we test our model on analogy-based image retrieval tasks. The results show that our method outperforms the previous work in qualitative evaluation.
22

Takase, Sho, Jun Suzuki, and Masaaki Nagata. "Character n-Gram Embeddings to Improve RNN Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5074–82. http://dx.doi.org/10.1609/aaai.v33i01.33015074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
23

Nguyen, Andre T., Fred Lu, Gary Lopez Munoz, Edward Raff, Charles Nicholas, and James Holt. "Out of Distribution Data Detection Using Dropout Bayesian Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7877–85. http://dx.doi.org/10.1609/aaai.v36i7.20757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We explore the utility of information contained within a dropout based Bayesian neural network (BNN) for the task of detecting out of distribution (OOD) data. We first show how previous attempts to leverage the randomized embeddings induced by the intermediate layers of a dropout BNN can fail due to the distance metric used. We introduce an alternative approach to measuring embedding uncertainty, and demonstrate how incorporating embedding uncertainty improves OOD data identification across three tasks: image classification, language classification, and malware detection.
24

P. Bhopale, Bhopale, and Ashish Tiwari. "LEVERAGING NEURAL NETWORK PHRASE EMBEDDING MODEL FOR QUERY REFORMULATION IN AD-HOC BIOMEDICAL INFORMATION RETRIEVAL." Malaysian Journal of Computer Science 34, no. 2 (April 30, 2021): 151–70. http://dx.doi.org/10.22452/mjcs.vol34no2.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study presents a Spark-enhanced neural network phrase embedding model to leverage query representation for relevant biomedical literature retrieval. Information retrieval for clinical decision support demands high precision. In recent years, word embeddings have evolved as a solution to such requirements. They represent vocabulary words as low-dimensional vectors in the context of their similar words; however, they are inadequate for dealing with semantic phrases or multi-word units. Learning vector embeddings for phrases while preserving word meanings is a challenging task. This study proposes a scalable phrase embedding technique to embed multi-word units into vector representations using a state-of-the-art word embedding technique, keeping both words and phrases in the same vector space. It enhances the effectiveness and efficiency of query language models by expanding queries with semantically associated terms and phrases. Embedding vectors are evaluated via a query expansion technique for an ad-hoc retrieval task over two benchmark corpora, viz. the TREC-CDS 2014 collection with 733,138 PubMed articles and the OHSUMED corpus with 348,566 articles collected from a Medline database. The results show that the proposed technique significantly outperforms other state-of-the-art retrieval techniques.
25

Gao, Yan, Yandong Wang, Patrick Wang, and Lei Gu. "Medical Named Entity Extraction from Chinese Resident Admit Notes Using Character and Word Attention-Enhanced Neural Network." International Journal of Environmental Research and Public Health 17, no. 5 (March 2, 2020): 1614. http://dx.doi.org/10.3390/ijerph17051614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The resident admit notes (RANs) in electronic medical records (EMRs) are first-hand information for studying a patient’s condition. Medical entity extraction from RANs is an important task for obtaining disease information for medical decision-making. In Chinese electronic medical records, each medical entity contains not only word information but also rich character information. An effective combination of words and characters is very important for medical entity extraction. We propose a medical entity recognition model based on a character and word attention-enhanced (CWAE) neural network for Chinese RANs. In our model, word embeddings and character-based embeddings are obtained through a character-enhanced word embedding (CWE) model and a Convolutional Neural Network (CNN) model. An attention mechanism then combines the character-based embeddings and word embeddings, which significantly improves the expressive ability of words. The new word embeddings obtained by the attention mechanism are taken as the input to a bidirectional long short-term memory (BI-LSTM) network and a conditional random field (CRF) to extract entities. We extracted nine types of key medical entities from Chinese RANs and evaluated our model. The proposed method was compared with two traditional machine learning methods, CRF and support vector machine (SVM), and with related deep learning models. The results show that our model performs better, reaching an F1-score of 94.44%.
26

Ng, Michael K., Hanrui Wu, and Andy Yip. "Stability and Generalization of Hypergraph Collaborative Networks." Machine Intelligence Research 21, no. 1 (January 15, 2024): 184–96. http://dx.doi.org/10.1007/s11633-022-1397-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Graph neural networks have been shown to be very effective in utilizing pairwise relationships across samples. Recently, there have been several successful proposals to generalize graph neural networks to hypergraph neural networks to exploit more complex relationships. In particular, the hypergraph collaborative networks yield superior results compared to other hypergraph neural networks for various semi-supervised learning tasks. The collaborative network can provide high quality vertex embeddings and hyperedge embeddings together by formulating them as a joint optimization problem and by using their consistency in reconstructing the given hypergraph. In this paper, we aim to establish the algorithmic stability of the core layer of the collaborative network and provide generalization guarantees. The analysis sheds light on the design of hypergraph filters in collaborative networks, for instance, how the data and hypergraph filters should be scaled to achieve uniform stability of the learning process. Some experimental results on real-world datasets are presented to illustrate the theory.
27

Wu, Xueyi, Yuanyuan Xu, Wenjie Zhang, and Ying Zhang. "Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach." Proceedings of the VLDB Endowment 17, no. 2 (October 2023): 175–83. http://dx.doi.org/10.14778/3626292.3626300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Bipartite graph embedding (BGE), as the fundamental task in bipartite network analysis, is to map each node to compact low-dimensional vectors that preserve intrinsic properties. The existing solutions towards BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generates higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to the expensive message passing or complex modelling choices. Hence, existing solutions face a challenge in achieving both embedding quality and model scalability. Motivated by this, we propose a novel graph neural network named AnchorGNN based on a global-local learning framework, which can generate high-quality BGE and scale to billion-scale bipartite graphs. Concretely, AnchorGNN leverages a novel anchor-based message passing schema for global learning, which enables global knowledge to be incorporated to generate node embeddings. Meanwhile, AnchorGNN offers an efficient one-hop local structure modelling using maximum likelihood estimation for bipartite graphs with rational analysis, avoiding large adjacency matrix construction. Both global information and local structure are integrated to generate distinguishable node embeddings. Extensive experiments demonstrate that AnchorGNN outperforms the best competitor by up to 36% in accuracy and achieves up to 28 times speed-up against the only metric-based baseline on billion-scale bipartite graphs.
28

Hagad, Juan Lorenzo, Tsukasa Kimura, Ken-ichi Fukui, and Masayuki Numao. "Learning Subject-Generalized Topographical EEG Embeddings Using Deep Variational Autoencoders and Domain-Adversarial Regularization." Sensors 21, no. 5 (March 4, 2021): 1792. http://dx.doi.org/10.3390/s21051792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Two of the biggest challenges in building models for detecting emotions from electroencephalography (EEG) devices are the relatively small amount of labeled samples and the strong variability of signal feature distributions between different subjects. In this study, we propose a context-generalized model that tackles the data constraints and subject variability simultaneously using a deep neural network architecture optimized for normally distributed subject-independent feature embeddings. Variational autoencoders (VAEs) at the input level allow the lower feature layers of the model to be trained on both labeled and unlabeled samples, maximizing the use of the limited data resources. Meanwhile, variational regularization encourages the model to learn Gaussian-distributed feature embeddings, resulting in robustness to small dataset imbalances. Subject-adversarial regularization applied to the bi-lateral features further enforces subject-independence on the final feature embedding used for emotion classification. The results from subject-independent performance experiments on the SEED and DEAP EEG-emotion datasets show that our model generalizes better across subjects than other state-of-the-art feature embeddings when paired with deep learning classifiers. Furthermore, qualitative analysis of the embedding space reveals that our proposed subject-invariant bi-lateral variational domain adversarial neural network (BiVDANN) architecture may improve the subject-independent performance by discovering normally distributed features.
29

Kim, Harang, and Hyun Min Song. "Lightweight IDS Framework Using Word Embeddings for In-Vehicle Network Security." Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications 15, no. 2 (June 29, 2024): 1–13. http://dx.doi.org/10.58346/jowua.2024.i2.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As modern vehicle systems evolve into advanced cyber-physical systems, vehicle vulnerability to cyber threats has significantly increased. This paper discusses the need for advanced security in the Controller Area Network (CAN), which currently lacks security features. We propose a novel Intrusion Detection System (IDS) utilizing word embedding techniques from Natural Language Processing (NLP) for effective sequential pattern representations to improve intrusion detection in CAN traffic. This method transforms CAN identifiers into multi-dimensional vectors, enabling the model to capture complex sequential patterns of CAN traffic behaviors. Our methodology focuses on a lightweight neural network adaptable for automotive systems with limited computational resources. At first, a Word2Vec model is trained to make the embedding matrix of CAN IDs. Then, using the pre-trained embedding layer extracted from the Word2Vec network, the classifier analyzes embeddings from CAN data to detect intrusions. This model is viable for resource-constrained environments due to its low computational expense and memory usage. Key contributions of this research are (1) the application of word embeddings for intrusion detection in CAN traffic, (2) a streamlined neural network that balances accuracy with efficiency, and (3) a comprehensive evaluation showing our model’s competitive performance compared to relatively heavy deep learning models. Experimental results using the Car-Hacking dataset, widely used for automotive security research, demonstrate that our IDS effectively detects four different types of attacks on CAN. This work advances vehicle security technologies, contributing to safer transportation systems.
30

Li, Wenli, and Gang Wu. "One-shot Based Knowledge Graph Embedded Neural Architecture Search Algorithm." Frontiers in Computing and Intelligent Systems 3, no. 3 (May 4, 2023): 1–5. http://dx.doi.org/10.54097/fcis.v3i3.7982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The quality of embeddings is crucial for downstream tasks in knowledge graphs. Researchers usually introduce neural architecture search into knowledge graph embedding so that an appropriate neural network can be constructed automatically for each dataset. An existing approach divides the search space into a macro search space and a micro search space. The search strategy for the micro space is based on a one-shot weight-sharing strategy, but all the information obtained from the previous supernet training is discarded, so the advantages of the one-shot algorithm are not fully exploited. In this paper, we conduct experiments on common datasets for two important downstream tasks of knowledge graph embedding, entity alignment and link prediction, and compare the search performance with existing manually designed neural networks as well as strong neural architecture search algorithms. The results show that, given the same search time, the improved algorithm finds better architectures on the same dataset, and it takes less time to find architectures with similar performance. Moreover, the architectures found by the improved algorithm approach the level of the best manually designed models.
31

Zhang, Kainan, Zhipeng Cai, and Daehee Seo. "Privacy-Preserving Federated Graph Neural Network Learning on Non-IID Graph Data." Wireless Communications and Mobile Computing 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/8545101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Since the concept of federated learning (FL) was proposed by Google in 2017, many applications have been combined with FL technology due to its outstanding performance in data integration, computing performance, privacy protection, etc. However, most traditional federated learning-based applications focus on image processing and natural language processing, with few achievements in graph neural networks due to the graph’s non-independent and identically distributed (non-IID) nature. Representation learning on graph-structured data generates graph embeddings, which help machines understand graphs effectively. Meanwhile, privacy protection plays a more meaningful role in analyzing graph-structured data such as social networks. Hence, this paper proposes PPFL-GNN, a novel privacy-preserving federated graph neural network framework for node representation learning, which is a pioneering work for graph neural network-based federated learning. In PPFL-GNN, clients utilize a local graph dataset to generate graph embeddings and integrate information from other collaborative clients to utilize federated learning to produce more accurate representation results. More importantly, by integrating embedding alignment techniques in PPFL-GNN, we overcome the obstacles of federated learning on non-IID graph data and can further reduce privacy exposure by sharing preferred information.
32

Peng, Hao, Qing Ke, Ceren Budak, Daniel M. Romero, and Yong-Yeol Ahn. "Neural embeddings of scholarly periodicals reveal complex disciplinary organizations." Science Advances 7, no. 17 (April 2021): eabb9004. http://dx.doi.org/10.1126/sciadv.abb9004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Understanding the structure of knowledge domains is one of the foundational challenges in the science of science. Here, we propose a neural embedding technique that leverages the information contained in the citation network to obtain continuous vector representations of scientific periodicals. We demonstrate that our periodical embeddings encode nuanced relationships between periodicals and the complex disciplinary and interdisciplinary structure of science, allowing us to make cross-disciplinary analogies between periodicals. Furthermore, we show that the embeddings capture meaningful “axes” that encompass knowledge domains, such as an axis from “soft” to “hard” sciences or from “social” to “biological” sciences, which allow us to quantitatively ground periodicals on a given dimension. By offering novel quantification in the science of science, our framework may, in turn, facilitate the study of how knowledge is created and organized.
33

Özkaya Eren, Ayşegül, and Mustafa Sert. "Audio Captioning with Composition of Acoustic and Semantic Information." International Journal of Semantic Computing 15, no. 02 (June 2021): 143–60. http://dx.doi.org/10.1142/s1793351x21400018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Generating audio captions is a new research area that combines audio and natural language processing to create meaningful textual descriptions for audio clips. To address this problem, previous studies mostly use encoder–decoder-based models without considering semantic information. To fill this gap, we present a novel encoder–decoder architecture using bi-directional Gated Recurrent Units (BiGRU) with audio and semantic embeddings. We extract semantic embeddings by obtaining subjects and verbs from the audio clip captions and combine these embeddings with audio embeddings to feed the BiGRU-based encoder–decoder model. To enable semantic embeddings for the test audios, we introduce a Multilayer Perceptron classifier to predict the semantic embeddings of those clips. We also present exhaustive experiments to show the efficiency of different features and datasets for our proposed model on the audio captioning task. To extract audio features, we use log Mel energy features, VGGish embeddings, and pretrained audio neural network (PANN) embeddings. Extensive experiments on two audio captioning datasets, Clotho and AudioCaps, show that our proposed model outperforms state-of-the-art audio captioning models across different evaluation metrics, and using the semantic information improves the captioning performance.
34

Ye, Yutong, Xiang Lian, and Mingsong Chen. "Efficient Exact Subgraph Matching via GNN-Based Path Dominance Embedding." Proceedings of the VLDB Endowment 17, no. 7 (March 2024): 1628–41. http://dx.doi.org/10.14778/3654621.3654630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The classic problem of exact subgraph matching returns those subgraphs in a large-scale data graph that are isomorphic to a given query graph, which has gained increasing importance in many real-world applications such as social network analysis, knowledge graph discovery in the Semantic Web, bibliographical network mining, and so on. In this paper, we propose a novel and effective graph neural network (GNN)-based path embedding framework (GNN-PE), which allows efficient exact subgraph matching without introducing false dismissals. Unlike traditional GNN-based graph embeddings that only produce approximate subgraph matching results, in this paper, we carefully devise GNN-based embeddings for paths, such that: if two paths (and 1-hop neighbors of vertices on them) have the subgraph relationship, their corresponding GNN-based embedding vectors will strictly follow the dominance relationship. With such a newly designed property of path dominance embeddings, we are able to propose effective pruning strategies based on path label/dominance embeddings and guarantee no false dismissals for subgraph matching. We build multidimensional indexes over path embedding vectors, and develop an efficient subgraph matching algorithm by traversing indexes over graph partitions in parallel and applying our pruning methods. We also propose a cost-model-based query plan that obtains query paths from the query graph with low query cost. Through extensive experiments, we confirm the efficiency and effectiveness of our proposed GNN-PE approach for exact subgraph matching on both real and synthetic graph data.
35

Croce, Danilo, Daniele Rossini, and Roberto Basili. "Neural embeddings: accurate and readable inferences based on semantic kernels." Natural Language Engineering 25, no. 4 (July 2019): 519–41. http://dx.doi.org/10.1017/s1351324919000238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sentence embeddings are suitable input vectors for the neural learning of a number of inferences about content and meaning. Similarity estimation, classification, emotional characterization of sentences as well as pragmatic tasks, such as question answering or dialogue, have largely demonstrated the effectiveness of vector embeddings to model semantics. Unfortunately, most of the above decisions are epistemologically opaque owing to the limited interpretability of the acquired neural models based on the involved embeddings. We think that any effective approach to meaning representation should be at least epistemologically coherent. In this paper, we concentrate on the readability of neural models, as a core property of any embedding technique consistent and effective in representing sentence meaning. In this perspective, this paper discusses a novel embedding technique (the Nyström methodology) that corresponds to the reconstruction of a sentence in a kernel space, inspired by rich semantic similarity metrics (a semantic kernel) rather than by a language model. In addition to being based on a kernel that captures grammatical and lexical semantic information, the proposed embedding can be used as the input vector of an effective neural learning architecture, called Kernel-based deep architectures (KDA). Finally, it also characterizes the explanatory capability of the KDA by design, as the proposed embedding is derived from examples that are both human readable and labeled. This property is obtained by the integration of KDAs with an explanation methodology, called layer-wise relevance propagation (LRP), already proposed in image processing. The Nyström embeddings support here the automatic compilation of argumentations in favor of or against a KDA inference, in the form of an explanation: each decision can in fact be linked through LRP back to the real examples, that is, the landmarks linguistically related to the input instance.
The KDA network output is explained via the analogy with the activated landmarks. Quantitative evaluation of the explanations shows that richer explanations based on semantic and syntagmatic structures characterize convincing arguments, as they effectively help the user in assessing whether or not to trust the machine decisions in different tasks, for example, Question Classification or Semantic Role Labeling. This confirms the epistemological benefit that Nyström embeddings may bring, as linguistically rich and meaningful representations for a variety of inference tasks.
36

Zhou, Silin, Jing Li, Hao Wang, Shuo Shang, and Peng Han. "GRLSTM: Trajectory Similarity Computation with Graph-Based Residual LSTM." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4972–80. http://dx.doi.org/10.1609/aaai.v37i4.25624.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The computation of trajectory similarity is a crucial task in many spatial data analysis applications. However, existing methods have been designed primarily for trajectories in Euclidean space, which overlooks the fact that real-world trajectories are often generated on road networks. This paper addresses this gap by proposing a novel framework, called GRLSTM (Graph-based Residual LSTM). To jointly capture the properties of trajectories and road networks, the proposed framework incorporates knowledge graph embedding (KGE), graph neural network (GNN), and the residual network into the multi-layer LSTM (Residual-LSTM). Specifically, the framework constructs a point knowledge graph to study the multiple relations of points, as points may belong to both the trajectory and the road network. KGE is introduced to learn point embeddings and relation embeddings to build the point fusion graph, while GNN is used to capture the topology structure information of the point fusion graph. Finally, Residual-LSTM is used to learn the trajectory embeddings. To further enhance the accuracy and robustness of the final trajectory embeddings, we introduce two new neighbor-based point loss functions, namely, a graph-based point loss function and a trajectory-based point loss function. GRLSTM is evaluated using two real-world trajectory datasets, and the experimental results demonstrate that GRLSTM significantly outperforms all the state-of-the-art methods.
37

Tzougas, George, and Konstantin Kutzkov. "Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning." Algorithms 16, no. 2 (February 9, 2023): 99. http://dx.doi.org/10.3390/a16020099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach.
38

Chang, Zhihao, Linzhu Yu, Yanchao Xu, and Wentao Hu. "Neural Embeddings for kNN Search in Biological Sequence." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 38–45. http://dx.doi.org/10.1609/aaai.v38i1.27753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Biological sequence nearest neighbor search plays a fundamental role in bioinformatics. To alleviate the pain of quadratic complexity for conventional distance computation, neural distance embeddings, which project sequences into geometric space, have been recognized as a promising paradigm. To maintain the distance order between sequences, these models all deploy triplet loss and use intuitive methods to select a subset of triplets for training from a vast selection space. However, we observed that such training often enables models to distinguish only a fraction of distance orders, leaving others unrecognized. Moreover, naively selecting more triplets for training under the state-of-the-art network not only adds costs but also hampers model performance. In this paper, we introduce Bio-kNN: a kNN search framework for biological sequences. It includes a systematic triplet selection method and a multi-head network, enhancing the discernment of all distance orders without increasing training expenses. Initially, we propose a clustering-based approach to partition all triplets into several clusters with similar properties, and then select triplets from these clusters using an innovative strategy. Meanwhile, we noticed that simultaneously training different types of triplets in the same network cannot achieve the expected performance, thus we propose a multi-head network to tackle this. Our network employs a convolutional neural network (CNN) to extract local features shared by all clusters, and then learns a multi-layer perceptron (MLP) head for each cluster separately. Besides, we treat the CNN as a special head, thereby integrating crucial local features which are neglected in previous models into our model for similarity recognition. Extensive experiments show that our Bio-kNN significantly outperforms the state-of-the-art methods on two large-scale datasets without increasing the training cost.
39

Xu, You-Wei, Hong-Jun Zhang, Kai Cheng, Xiang-Lin Liao, Zi-Xuan Zhang, and Yun-Bo Li. "Knowledge graph embedding with entity attributes using hypergraph neural networks." Intelligent Data Analysis 26, no. 4 (July 11, 2022): 959–75. http://dx.doi.org/10.3233/ida-216007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Knowledge graph embedding is aimed at capturing the semantic information of entities by modeling the structural information between entities. For long-tail entities which lack sufficient structural information, general knowledge graph embedding models often show relatively low performance in link prediction. In order to solve such problems, this paper proposes a general knowledge graph embedding framework to learn the structural information as well as the attribute information of the entities simultaneously. Under this framework, a H-AKRL (Hypergraph Neural Networks based Attribute-embodied Knowledge Representation Learning) model is put forward, where the hypergraph neural network is used to model the correlation between entities and attributes at a higher level. The complementary relationship between attribute information and structural information is taken full advantage of, enabling H-AKRL to finally achieve the goal of improving link prediction performance. Experiments on multiple real-world data sets show that the H-AKRL model significantly improves link prediction performance, especially in the embeddings of long-tail entities.
40

Zhong, Fengzhe, Yan Liu, Lian Liu, Guangsheng Zhang, and Shunran Duan. "DEDGCN: Dual Evolving Dynamic Graph Convolutional Network." Security and Communication Networks 2022 (May 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/6945397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the wide application of graph data in many fields, graph representation learning has become a focus of scholarly attention. Dynamic graph representation learning, in particular, is an important part of modeling graphs that change over time. On the one hand, most dynamic graph representation methods focus either on graph structure changes or on node embedding changes, ignoring the relationship between the two. On the other hand, most dynamic graph neural networks must learn node embeddings from specific tasks, resulting in embeddings with poor generality that cannot be used in unsupervised tasks. Hence, the Dual Evolving Dynamic Graph Convolutional Network (DEDGCN) is proposed to solve these problems. DEDGCN uses a recurrent neural network to drive the evolution of both the GCN and the nodes, from which it extracts the structural features of the dynamic graph and learns the stability features of nodes, respectively, forming an adaptive dynamic graph convolutional network. DEDGCN can be classified as an unsupervised graph convolutional network: it can be trained on unlabeled dynamic graphs, has broader application scenarios, and produces node embeddings with strong generality. We evaluate the proposed method on three tasks: node classification, edge classification, and link prediction. In the classification tasks, on graphs with large scale, complex connection relationships, and uncertain change rules, DEDGCN achieves an F1 value of 77% for node classification and over 90% for edge classification. The results show that DEDGCN is effective at capturing graph features and substantially outperforms the baseline methods, demonstrating the importance of capturing node stability features in dynamic graph representation learning. The ability of DEDGCN on unsupervised tasks is further verified using clustering and anomaly detection, confirming that the network embeddings learned by DEDGCN are broadly applicable.
41

Zhang, Yuanpeng, Jingye Guan, Haobo Wang, Kaiming Li, Ying Luo, and Qun Zhang. "Generalized Zero-Shot Space Target Recognition Based on Global-Local Visual Feature Embedding Network." Remote Sensing 15, no. 21 (October 28, 2023): 5156. http://dx.doi.org/10.3390/rs15215156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Existing deep learning-based space target recognition methods rely on abundantly labeled samples and are not capable of recognizing samples from unseen classes without training. In this article, based on generalized zero-shot learning (GZSL), we propose a space target recognition framework to simultaneously recognize space targets from both seen and unseen classes. First, we defined semantic attributes to describe the characteristics of different categories of space targets. Second, we constructed a dual-branch neural network, termed the global-local visual feature embedding network (GLVFENet), which jointly learns global and local visual features to obtain discriminative feature representations, thereby achieving GZSL for space targets with higher accuracy. Specifically, the global visual feature embedding subnetwork (GVFE-Subnet) calculates the compatibility score by measuring the cosine similarity between the projection of global visual features in the semantic space and various semantic vectors, thereby obtaining global visual embeddings. The local visual feature embedding subnetwork (LVFE-Subnet) introduces soft space attention, and an encoder discovers the semantic-guided local regions in the image to then generate local visual embeddings. Finally, the visual embeddings from both branches were combined and matched with semantics. The calibrated stacking method is introduced to achieve GZSL recognition of space targets. Extensive experiments were conducted on an electromagnetic simulation dataset of nine categories of space targets, and the effectiveness of our GLVFENet is confirmed.
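The compatibility score described for GVFE-Subnet is a cosine similarity between a projected visual feature and each class's semantic attribute vector; prediction picks the highest-scoring class. An illustrative reconstruction, not the authors' code (`visual_proj` stands in for the already-projected global visual feature):

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def compatibility_scores(visual_proj, semantic_vectors):
    # One score per class: similarity of the projected visual feature
    # to each class's semantic attribute vector.
    return [cosine(visual_proj, s) for s in semantic_vectors]

def predict(visual_proj, semantic_vectors):
    # Return the index of the best-matching semantic class.
    scores = compatibility_scores(visual_proj, semantic_vectors)
    return max(range(len(scores)), key=scores.__getitem__)
```

Calibrated stacking, which the paper adds on top, would subtract a constant from the seen-class scores before the argmax to reduce the bias toward seen classes.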
42

Koshel, E. "Нейронно-мережевий підхід до неперервного вкладення одновимірних потоків даних для аналізу часових рядів в реальному часі" [A neural-network approach to continuous embedding of univariate data streams for real-time time series analysis]. System technologies 2, no. 151 (April 17, 2024): 92–101. http://dx.doi.org/10.34185/1562-9945-2-151-2024-08.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Univariate time series analysis is a universal problem that arises in various science and engineering fields, and the approaches and methods developed around it are diverse and numerous. These methods, however, often require the univariate data stream to be transformed into a sequence of higher-dimensional vectors (embeddings). In this article, we explore the existing embedding methods, examine their suitability for real-time operation, and propose a new approach that couples the classical methods with neural-network-based ones to yield results that are better in both accuracy and computational performance. Specifically, a Broomhead-King-inspired embedding algorithm, implemented in the form of an autoencoder neural network, is employed to produce a unique and smooth representation of the input data fragments in the latent space.
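The classical delay (sliding-window) embedding that such approaches build on maps a univariate stream to overlapping windows, which an autoencoder could then compress into a smoother latent representation. A generic sketch (`dim` and `lag` are illustrative parameters, not values from the article):

```python
def delay_embed(stream, dim, lag=1):
    # Takens-style delay embedding: each output vector collects `dim`
    # samples spaced `lag` steps apart, so consecutive windows overlap
    # and the one-dimensional stream becomes a sequence of dim-D points.
    n_windows = len(stream) - (dim - 1) * lag
    return [stream[i:i + dim * lag:lag] for i in range(n_windows)]
```

For real-time use, each new sample yields exactly one new window, so the transform is incremental by construction.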
43

Levy, Omer, Yoav Goldberg, and Ido Dagan. "Improving Distributional Similarity with Lessons Learned from Word Embeddings." Transactions of the Association for Computational Linguistics 3 (December 2015): 211–25. http://dx.doi.org/10.1162/tacl_a_00134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.
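One concrete example of transferring a word2vec design choice to a count-based model, as this paper advocates, is context-distribution smoothing applied to PPMI. A minimal sketch, illustrative rather than the authors' code (the exponent 0.75 is the smoothing value popularized by word2vec's negative sampling):

```python
import math
from collections import Counter

def ppmi_smoothed(pairs, alpha=0.75):
    # pairs: list of (word, context) co-occurrence events.
    # Raising context counts to the power `alpha` flattens the context
    # distribution, a hyperparameter borrowed from word2vec that also
    # improves the traditional count-based PPMI model.
    word_counts = Counter(w for w, _ in pairs)
    ctx_counts = Counter(c for _, c in pairs)
    ctx_total = sum(n ** alpha for n in ctx_counts.values())
    total = len(pairs)
    joint = Counter(pairs)
    ppmi = {}
    for (w, c), n in joint.items():
        p_joint = n / total
        p_word = word_counts[w] / total
        p_ctx = ctx_counts[c] ** alpha / ctx_total
        pmi = math.log(p_joint / (p_word * p_ctx))
        ppmi[(w, c)] = max(0.0, pmi)  # positive PMI: clip negatives to 0
    return ppmi
```

Setting `alpha=1.0` recovers plain PPMI, which makes the hyperparameter's effect easy to ablate.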
44

Wang, Yu, Ke Wang, Fengjuan Gao, and Linzhang Wang. "Learning semantic program embeddings with graph interval neural network." Proceedings of the ACM on Programming Languages 4, OOPSLA (November 13, 2020): 1–27. http://dx.doi.org/10.1145/3428205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Eliyahu Sason, Yackov Lubarsky, Alexei Gaissinski, Eli Kravchik, and Pavel Kisilev. "Oracle-based data generation for highly efficient digital twin network training." ITU Journal on Future and Evolving Technologies 4, no. 3 (September 8, 2023): 472–84. http://dx.doi.org/10.52953/aweu6345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent advances in Graph Neural Networks (GNNs) have opened new capabilities to analyze complex communication systems. However, little work has been done to study the effects of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large, previously unseen networks under strict limitations on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the above generated data set by analyzing the data embeddings from different Oracle models. This procedure provides a very small but information-rich training set. The above data embedding method translates highly heterogeneous network samples into a common embedding space, wherein the samples can be easily related to each other. The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking challenge.
46

Hu, Shengze, Weixin Zeng, Pengfei Zhang, and Jiuyang Tang. "Neural Graph Similarity Computation with Contrastive Learning." Applied Sciences 12, no. 15 (July 29, 2022): 7668. http://dx.doi.org/10.3390/app12157668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computing the similarity between graphs is a longstanding and challenging problem with many real-world applications. Recent years have witnessed a rapid increase in neural-network-based methods, which project graphs into embedding space and devise end-to-end frameworks to learn to estimate graph similarity. Nevertheless, these solutions usually design complicated networks to capture the fine-grained interactions between graphs, and hence have low efficiency. Additionally, they rely on labeled data for training the neural networks and overlook the useful information hidden in the graphs themselves. To address the aforementioned issues, in this work, we put forward a contrastive neural graph similarity learning framework, Conga. Specifically, we utilize vanilla graph convolutional networks to generate the graph representations and capture the cross-graph interactions via a simple multilayer perceptron. We further devise an unsupervised contrastive loss to discriminate the graph embeddings and guide the training process by learning more expressive entity representations. Extensive experiment results on public datasets validate that our proposal has more robust performance and higher efficiency compared with state-of-the-art methods.
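The unsupervised contrastive loss described in this abstract follows the familiar InfoNCE pattern: pull an embedding toward its positive view and push it away from other graph embeddings. A minimal sketch on precomputed embedding vectors, an illustration rather than the Conga implementation (`temp` is a generic temperature parameter):

```python
import math

def contrastive_loss(anchor, positive, negatives, temp=0.5):
    # InfoNCE-style objective: softmax over cosine similarities, where
    # the positive pair should dominate the negatives.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cos(anchor, positive) / temp]
    logits += [cos(anchor, n) / temp for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)  # -log softmax of the positive
```

Because the negatives come from other graphs in the batch, no similarity labels are needed, which is what makes the training signal unsupervised.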
47

Sun, Xia, Ke Dong, Long Ma, Richard Sutcliffe, Feijuan He, Sushing Chen, and Jun Feng. "Drug-Drug Interaction Extraction via Recurrent Hybrid Convolutional Neural Networks with an Improved Focal Loss." Entropy 21, no. 1 (January 8, 2019): 37. http://dx.doi.org/10.3390/e21010037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Drug-drug interactions (DDIs) may bring huge health risks and dangerous effects to a patient’s body when taking two or more drugs at the same time or within a certain period of time. Therefore, the automatic extraction of unknown DDIs has great potential for the development of pharmaceutical agents and the safety of drug use. In this article, we propose a novel recurrent hybrid convolutional neural network (RHCNN) for DDI extraction from biomedical literature. In the embedding layer, the texts mentioning two entities are represented as a sequence of semantic embeddings and position embeddings. In particular, the complete semantic embedding is obtained by the information fusion between a word embedding and its contextual information which is learnt by recurrent structure. After that, the hybrid convolutional neural network is employed to learn the sentence-level features which consist of the local context features from consecutive words and the dependency features between separated words for DDI extraction. Lastly but most significantly, in order to make up for the defects of the traditional cross-entropy loss function when dealing with class imbalanced data, we apply an improved focal loss function to mitigate against this problem when using the DDIExtraction 2013 dataset. In our experiments, we achieve DDI automatic extraction with a micro F-score of 75.48% on the DDIExtraction 2013 dataset, outperforming the state-of-the-art approach by 2.49%.
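The focal loss that this paper adapts down-weights well-classified examples relative to plain cross-entropy, which is what mitigates class imbalance. A minimal binary sketch (the paper's improved multi-class variant differs in detail; `gamma` and `alpha` are the standard focusing and balancing hyperparameters):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # p: predicted probability of the positive class; y: true label (0/1).
    # The (1 - p_t)^gamma factor shrinks the loss on easy, confidently
    # correct examples, so training focuses on hard or minority cases.
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma=0` and `alpha=0.5` this reduces (up to a constant factor) to ordinary binary cross-entropy.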
48

Si, Yuqi, Jingqi Wang, Hua Xu, and Kirk Roberts. "Enhancing clinical concept extraction with contextual embeddings." Journal of the American Medical Informatics Association 26, no. 11 (July 2, 2019): 1297–304. http://dx.doi.org/10.1093/jamia/ocz096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Objective Neural network–based representations (“embeddings”) have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (eg, ELMo, BERT) have further pushed the state of the art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these new models for clinical concept extraction, including comparing these to traditional word embedding methods (word2vec, GloVe, fastText). Materials and Methods Both off-the-shelf, open-domain embeddings and pretrained clinical embeddings from MIMIC-III (Medical Information Mart for Intensive Care III) are evaluated. We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings and compare these on 4 concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We also analyze the impact of the pretraining time of a large language model like ELMo or BERT on the extraction performance. Last, we present an intuitive way to understand the semantic information encoded by contextual embeddings. Results Contextual embeddings pretrained on a large clinical corpus achieve new state-of-the-art performances across all concept extraction tasks. The best-performing model outperforms all state-of-the-art methods with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. Conclusions We demonstrate the potential of contextual embeddings through the state-of-the-art performance these methods achieve on clinical concept extraction. Additionally, we demonstrate that contextual embeddings encode valuable semantic information not accounted for in traditional word representations.
49

Zhuang, Chengxu, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118, no. 3 (January 11, 2021): e2014196118. http://dx.doi.org/10.1073/pnas.2014196118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
50

Wang, Chaoyi. "Collaborative filtering method based on graph neural network." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1288–94. http://dx.doi.org/10.54254/2755-2721/6/20230710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An essential component of contemporary computer application technology is the recommender system (RS), and collaborative filtering is one of its most crucial elements. By learning vector representations (embeddings) of users and items, a model that combines graph neural networks with model-based collaborative filtering can capture high-order connectivity in the user-item graph and perform better overall. This connectivity explicitly and effectively injects the collaborative signal into the embedding process; better embeddings in turn yield greater performance than established collaborative filtering techniques such as matrix factorization. This article primarily introduces the neural graph collaborative filtering (NGCF) algorithm. In this paper, the performance of NGCF is verified on several data sets, and the experimental results show that there is still room for improvement in practical application: for instance, NGCF is not well suited to processing complicated data, and user cold start remains an issue. This study offers remedies for the difficulties the NGCF algorithm encounters in real-world use, and research on improving the algorithm in light of these issues will continue.
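The embedding propagation at the heart of NGCF-style models can be caricatured as neighborhood aggregation on the user-item graph. A toy sketch with mean aggregation (NGCF itself uses learned weight matrices and element-wise affinity terms on top of this; the dictionary-of-lists graph format is an illustrative choice):

```python
def propagate(embeddings, adj):
    # One round of message passing on the user-item bipartite graph:
    # each node's new embedding is the mean of its neighbors' embeddings,
    # injecting first-order collaborative signal. Stacking L such rounds
    # exposes each node to its L-hop neighborhood (high-order connectivity).
    new = {}
    for node, neighbors in adj.items():
        dim = len(embeddings[node])
        new[node] = [
            sum(embeddings[n][d] for n in neighbors) / len(neighbors)
            for d in range(dim)
        ]
    return new
```

After two rounds, a user's embedding already reflects items liked by other users with overlapping histories, which is the collaborative signal matrix factorization only captures implicitly.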