Academic literature on the topic 'Neural Network Embeddings'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural Network Embeddings.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural Network Embeddings":

1

Che, Feihu, Dawei Zhang, Jianhua Tao, Mingyue Niu, and Bocheng Zhao. "ParamE: Regarding Neural Network Parameters as Relation Embeddings for Knowledge Graph Completion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2774–81. http://dx.doi.org/10.1609/aaai.v34i03.5665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We study the task of learning entity and relation embeddings in knowledge graphs for predicting missing links. Previous translational models on link prediction make use of translational properties but lack enough expressiveness, while the convolution neural network based model (ConvE) takes advantage of the great nonlinearity fitting ability of neural networks but overlooks translational properties. In this paper, we propose a new knowledge graph embedding model called ParamE which can utilize the two advantages together. In ParamE, head entity embeddings, relation embeddings and tail entity embeddings are regarded as the input, parameters and output of a neural network respectively. Since parameters in networks are effective in converting input to output, taking neural network parameters as relation embeddings makes ParamE much more expressive and translational. In addition, the entity and relation embeddings in ParamE are from feature space and parameter space respectively, which is in line with the essence that entities and relations are supposed to be mapped into two different spaces. We evaluate the performances of ParamE on standard FB15k-237 and WN18RR datasets, and experiments show ParamE can significantly outperform existing state-of-the-art models, such as ConvE, SACN, RotatE and D4-STE/Gumbel.
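The core idea of ParamE, treating the relation embedding as the parameters of a network that maps the head entity to the tail entity, can be sketched with a toy example. All names, sizes, and the single tanh layer below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Hypothetical toy embeddings (sizes are illustrative, not from the paper).
head = rng.normal(size=dim)            # head entity: the network's input
tail = rng.normal(size=dim)            # tail entity: the expected output
relation = rng.normal(size=dim * dim)  # relation: the network's parameters

def parame_score(head, relation, tail, dim):
    """Score a triple by running the head through a one-layer network
    whose weights are the relation embedding, then comparing to the tail."""
    W = relation.reshape(dim, dim)      # relation embedding reshaped as weights
    out = np.tanh(W @ head)             # network output in feature space
    return -np.linalg.norm(out - tail)  # higher score = closer to the tail

score = parame_score(head, relation, tail, dim)
```

This makes the paper's point concrete: entities live in feature space (inputs/outputs) while relations live in parameter space (weights), so the two are mapped into different spaces by construction.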
2

Huang, Junjie, Huawei Shen, Liang Hou, and Xueqi Cheng. "SDGNN: Learning Node Representation for Signed Directed Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 196–203. http://dx.doi.org/10.1609/aaai.v35i1.16093.

Abstract:
Network embedding is aimed at mapping nodes in a network into low-dimensional vector representations. Graph Neural Networks (GNNs) have received widespread attention and achieve state-of-the-art performance in learning node representations. However, most GNNs only work in unsigned networks, where only positive links exist. It is not trivial to transfer these models to signed directed networks, which are widely observed in the real world yet less studied. In this paper, we first review two fundamental sociological theories (i.e., status theory and balance theory) and conduct empirical studies on real-world datasets to analyze the social mechanism in signed directed networks. Guided by related sociological theories, we propose a novel Signed Directed Graph Neural Networks model named SDGNN to learn node embeddings for signed directed networks. The proposed model simultaneously reconstructs link signs, link directions, and signed directed triangles. We validate our model's effectiveness on five real-world datasets, which are commonly used as the benchmark for signed network embeddings. Experiments demonstrate the proposed model outperforms existing models, including feature-based methods, network embedding methods, and several GNN methods.
3

Srinidhi, K., T. L. S. Tejaswi, Ch. Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Abstract:
We propose an advanced, well-trained sentiment analysis approach based on word-specific embeddings, dubbed sentiment embeddings. Existing word- and phrase-embedding learning algorithms mainly make use of the contexts of terms but ignore the sentiment of texts, so words conveying different sentiments but appearing in similar contexts are mapped to nearby word vectors. This problem is bridged by combining the encoding of opinion-carrying text with sentiment embeddings. For sentiment analysis on e-commerce and social networking sites, we developed neural network based algorithms with tailored loss functions that capture sentiment. This research applies the embeddings to word-level and sentence-level sentiment analysis and classification, and to constructing sentiment-oriented lexicons. Experimental analysis and results show that sentiment embedding techniques outperform context-based embeddings on many distributed datasets. This work also provides familiarity with neural network techniques for learning word embeddings in other NLP tasks.
4

Armandpour, Mohammadreza, Patrick Ding, Jianhua Huang, and Xia Hu. "Robust Negative Sampling for Network Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3191–98. http://dx.doi.org/10.1609/aaai.v33i01.33013191.

Abstract:
Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable candidate for the network embedding problem as a distinct objective. We show NS can learn undesirable embeddings, as the result of the “Popular Neighbor Problem.” We use the theory to develop a new method “R-NS” that alleviates the problems of NS by using a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate the superiority of R-NS over NS for multi-label classification on a variety of real-world networks including social networks and language networks.
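The Skip-Gram objective that negative sampling approximates can be sketched for a single training pair as follows. This is a generic SGNS loss, not the paper's R-NS method; vector sizes and names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(u, v_pos, v_negs):
    """Skip-gram negative-sampling loss for one (node, neighbor) pair:
    pull the positive context vector toward u, push sampled negatives away."""
    pos = -np.log(sigmoid(u @ v_pos))
    neg = -sum(np.log(sigmoid(-u @ v)) for v in v_negs)
    return pos + neg

rng = np.random.default_rng(1)
u = rng.normal(size=8)                        # node embedding
v_pos = rng.normal(size=8)                    # a true neighbor's context vector
v_negs = [rng.normal(size=8) for _ in range(5)]  # randomly sampled negatives
loss = sgns_loss(u, v_pos, v_negs)
```

The paper's "Popular Neighbor Problem" arises from how the negatives `v_negs` are sampled; R-NS replaces the naive sampler with a more careful scheme and penalization.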
5

Kamath, S., K. G. Karibasappa, Anvitha Reddy, Arati M. Kallur, B. B. Priyanka, and B. P. Bhagya. "Improving the Relation Classification Using Convolutional Neural Network." IOP Conference Series: Materials Science and Engineering 1187, no. 1 (September 1, 2021): 012004. http://dx.doi.org/10.1088/1757-899x/1187/1/012004.

Abstract:
Relation extraction has been the emerging research topic in the field of Natural Language Processing. The proposed work classifies the relations among the data considering the semantic relevance of words using word2vec embeddings towards training the convolutional neural network. We intended to use the semantic relevance of the words in the document to enrich the learning of the embeddings for improved classification. We designed a framework to automatically extract the relations between the entities using deep learning techniques. The framework includes pre-processing, extracting the feature vectors using word2vec embedding, and classification using convolutional neural networks. We perform extensive experimentation using benchmark datasets and show improved classification accuracy in comparison with the state-of-the-art methodologies using appropriate methods and also including the additional relations.
6

Gu, Haishuo, Jinguang Sui, and Peng Chen. "Graph Representation Learning for Street-Level Crime Prediction." ISPRS International Journal of Geo-Information 13, no. 7 (July 1, 2024): 229. http://dx.doi.org/10.3390/ijgi13070229.

Abstract:
In contemporary research, the street network emerges as a prominent and recurring theme in crime prediction studies. Meanwhile, graph representation learning shows considerable success, which motivates us to apply the methodology to crime prediction research. In this article, a graph representation learning approach is utilized to derive topological structure embeddings within the street network. Subsequently, a heterogeneous information network that incorporates both the street network and urban facilities is constructed, and embeddings through link prediction tasks are obtained. Finally, the two types of high-order embeddings, along with other spatio-temporal features, are fed into a deep neural network for street-level crime prediction. The proposed framework is tested using data from Beijing, and the outcomes demonstrate that both types of embeddings have a positive impact on crime prediction, with the second embedding showing a more significant contribution. Comparative experiments indicate that the proposed deep neural network offers superior efficiency in crime prediction.
7

Zhang, Lei, Feng Qian, Jie Chen, and Shu Zhao. "An Unsupervised Rapid Network Alignment Framework via Network Coarsening." Mathematics 11, no. 3 (January 21, 2023): 573. http://dx.doi.org/10.3390/math11030573.

Abstract:
Network alignment aims to identify the correspondence of nodes between two or more networks. It is the cornerstone of many network mining tasks, such as cross-platform recommendation and cross-network data aggregation. Recently, with the development of network representation learning techniques, researchers have proposed many embedding-based network alignment methods. The effect is better than traditional methods. However, several issues and challenges remain for network alignment tasks, such as lack of labeled data, mapping across network embedding spaces, and computational efficiency. Based on the graph neural network (GNN), we propose the URNA (unsupervised rapid network alignment) framework to achieve an effective balance between accuracy and efficiency. There are two phases: model training and network alignment. We exploit coarse networks to accelerate the training of GNN after first compressing the original networks into small networks. We also use parameter sharing to guarantee the consistency of embedding spaces and an unsupervised loss function to update the parameters. In the network alignment phase, we first use a once-pass forward propagation to learn node embeddings of original networks, and then we use multi-order embeddings from the outputs of all convolutional layers to calculate the similarity of nodes between the two networks via vector inner product for alignment. Experimental results on real-world datasets show that the proposed method can significantly reduce running time and memory requirements while guaranteeing alignment performance.
8

Truică, Ciprian-Octavian, Elena-Simona Apostol, Maria-Luiza Șerban, and Adrian Paschke. "Topic-Based Document-Level Sentiment Analysis Using Contextual Cues." Mathematics 9, no. 21 (October 27, 2021): 2722. http://dx.doi.org/10.3390/math9212722.

Abstract:
Document-level Sentiment Analysis is a complex task that implies the analysis of large textual content that can incorporate multiple contradictory polarities at the phrase and word levels. Most of the current approaches either represent textual data using pre-trained word embeddings without considering the local context that can be extracted from the dataset, or they detect the overall topic polarity without considering both the local and global context. In this paper, we propose a novel document-topic embedding model, DocTopic2Vec, for document-level polarity detection in large texts by employing general and specific contextual cues obtained through the use of document embeddings (Doc2Vec) and Topic Modeling. In our approach, (1) we use a large dataset with game reviews to create different word embeddings by applying Word2Vec, FastText, and GloVe, (2) we create Doc2Vecs enriched with the local context given by the word embeddings for each review, (3) we construct topic embeddings Topic2Vec using three Topic Modeling algorithms, i.e., LDA, NMF, and LSI, to enhance the global context of the Sentiment Analysis task, (4) for each document and its dominant topic, we build the new DocTopic2Vec by concatenating the Doc2Vec with the Topic2Vec created with the same word embedding. We also design six new Convolutional-based (Bidirectional) Recurrent Deep Neural Network Architectures that show promising results for this task. The proposed DocTopic2Vecs are used to benchmark multiple Machine and Deep Learning models, i.e., a Logistic Regression model, used as a baseline, and 18 Deep Neural Networks Architectures. The experimental results show that the new embedding and the new Deep Neural Network Architectures achieve better results than the baseline, i.e., Logistic Regression and Doc2Vec.
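Step (4) of the approach, concatenating a document embedding with the topic embedding built from the same word-embedding space, reduces to a simple vector operation. The dimensions below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

doc2vec = rng.normal(size=300)    # local context: document embedding (size assumed)
topic2vec = rng.normal(size=300)  # global context: dominant-topic embedding

# DocTopic2Vec as described: concatenate the document embedding with the
# topic embedding created with the same word embedding.
doc_topic2vec = np.concatenate([doc2vec, topic2vec])
```

The concatenated vector then serves as the input feature for the downstream classifiers (Logistic Regression baseline or the deep architectures).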
9

Jang, Youngjin, and Harksoo Kim. "Reliable Classification of FAQs with Spelling Errors Using an Encoder-Decoder Neural Network in Korean." Applied Sciences 9, no. 22 (November 7, 2019): 4758. http://dx.doi.org/10.3390/app9224758.

Abstract:
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network. The proposed model uses three types of word embeddings: fixed word embeddings for representing domain-independent meanings of words, fine-tuned word embeddings for representing domain-specific meanings of words, and character-level word embeddings for bridging lexical gaps caused by spelling errors. It also uses class embeddings to represent domain knowledge associated with each category. In the experiments with an FAQ dataset about online banking, the proposed embedding methods contributed to an improved performance of the sentence classification. In addition, the proposed model showed better performance (with an accuracy of 0.810 in the classification of 411 categories) than that of the comparison model.
10

Guo, Lei, Haoran Jiang, Xiyu Liu, and Changming Xing. "Network Embedding-Aware Point-of-Interest Recommendation in Location-Based Social Networks." Complexity 2019 (November 4, 2019): 1–18. http://dx.doi.org/10.1155/2019/3574194.

Abstract:
As one of the important techniques to explore unknown places for users, the methods that are proposed for point-of-interest (POI) recommendation have been widely studied in recent years. Compared with traditional recommendation problems, POI recommendations are suffering from more challenges, such as the cold-start and one-class collaborative filtering problems. Many existing studies have focused on how to overcome these challenges by exploiting different types of contexts (e.g., social and geographical information). However, most of these methods only model these contexts as regularization terms, and the deep information hidden in the network structure has not been fully exploited. On the other hand, neural network-based embedding methods have shown its power in many recommendation tasks with its ability to extract high-level representations from raw data. According to the above observations, to well utilize the network information, a neural network-based embedding method (node2vec) is first exploited to learn the user and POI representations from a social network and a predefined location network, respectively. To deal with the implicit feedback, a pair-wise ranking-based method is then introduced. Finally, by regarding the pretrained network representations as the priors of the latent feature factors, an embedding-based POI recommendation method is proposed. As this method consists of an embedding model and a collaborative filtering model, when the training data are absent, the predictions will mainly be generated by the extracted embeddings. In other cases, this method will learn the user and POI factors from these two components. Experiments on two real-world datasets demonstrate the importance of the network embeddings and the effectiveness of our proposed method.
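The pair-wise ranking component described above resembles BPR-style objectives: a visited POI should score higher for a user than an unvisited one. A minimal sketch under that assumption (all vectors and names are made up for illustration; the user embedding here merely stands in for a pretrained node2vec representation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_ranking_loss(user, poi_visited, poi_unvisited):
    """BPR-style pairwise loss: penalize cases where the unvisited POI
    scores close to or above the visited one for the same user."""
    diff = user @ poi_visited - user @ poi_unvisited
    return -np.log(sigmoid(diff))

rng = np.random.default_rng(3)
user = rng.normal(size=16)               # e.g. initialized from node2vec output
pos = user + 0.1 * rng.normal(size=16)   # a visited POI, near the user
neg = rng.normal(size=16)                # an unvisited POI
loss = pairwise_ranking_loss(user, pos, neg)
```

Using the pretrained network embeddings as priors means the model falls back on these vectors when interaction data are absent, exactly the cold-start behavior the abstract describes.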

Dissertations / Theses on the topic "Neural Network Embeddings":

1

Embretsén, Niklas. "Representing Voices Using Convolutional Neural Network Embeddings." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261415.

Abstract:
In today’s society, services centered around voices are gaining popularity. Being able to provide users with voices they like, to obtain and sustain their attention, is important for enhancing the overall experience of the service. Finding an efficient way of representing voices such that similarity comparisons can be performed is therefore of great use. In the field of Natural Language Processing, great progress has been made using embeddings from Deep Learning models to represent words in an unsupervised fashion; these representations managed to capture the semantics of the words. This thesis sets out to explore whether such embeddings can be found for audio data as well, more specifically voices of audiobook narrators, such that similarities between different voices are captured. For this, two different Convolutional Neural Networks are developed and evaluated, trained on spectrogram representations of the voices. One performs regular classification, while the other uses pairwise relationships and a Kullback–Leibler divergence based loss function, in an attempt to minimize and maximize the difference of the output between similar and dissimilar pairs of samples. From these models, the embeddings used to represent each sample are extracted from the different layers of the fully connected part of the network during the evaluation. Both an objective and a subjective evaluation are performed. During the objective evaluation of the models, it is first investigated whether the found embeddings are distinct for the different narrators, as well as whether the embeddings encode information about gender. The regular classification model is then further evaluated through a user test, as it achieved an order of magnitude better results during the objective evaluation. The user test sets out to evaluate whether the found embeddings capture information based on perceived similarity.
It is concluded that the proposed approach has the potential to be used for representing voices in a way such that similarity is encoded, although more extensive testing, research, and evaluation have to be performed to know for sure. For future work, it is proposed to perform more sophisticated pre-processing of the data and also to collect and include data about relationships between voices during the training of the models.
2

Bopaiah, Jeevith. "A recurrent neural network architecture for biomedical event trigger classification." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/73.

Abstract:
A “biomedical event” is a broad term used to describe the roles and interactions between entities (such as proteins, genes and cells) in a biological system. The task of biomedical event extraction aims at identifying and extracting these events from unstructured texts. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying words/phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories - Anatomical, Molecular, General and Planned. While most of the existing approaches are based on traditional machine learning algorithms which require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs that can be used in a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies are being explored in the area of biomedical trigger classification. Our best results were obtained from an ensemble of 50 models which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
3

PALUMBO, ENRICO. "Knowledge Graph Embeddings for Recommender Systems." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2850588.

4

Pettersson, Fredrik. "Optimizing Deep Neural Networks for Classification of Short Texts." Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76811.

Abstract:
This master's thesis investigates how a state-of-the-art (SOTA) deep neural network (NN) model can be created for a specific natural language processing (NLP) dataset, the effects of using different dimensionality reduction techniques on common pre-trained word embeddings, and how well such a model generalizes to a secondary dataset. The research is motivated by two factors. One is that the construction of a machine learning (ML) text classification (TC) model is typically done around a specific dataset and often requires a lot of manual intervention. It is therefore hard to know exactly what procedures to implement for a specific dataset and how the result will be affected. The other reason is that, if the dimensionality of pre-trained embedding vectors can be lowered without losing accuracy, and thus saving execution time, other techniques can be used during the time saved to achieve even higher accuracy. A handful of deep neural network architectures are used, namely a convolutional neural network (CNN), a long short-term memory neural network (LSTM) and a bidirectional LSTM (Bi-LSTM) architecture. These deep neural network architectures are combined with four different word embeddings: GoogleNews-vectors-negative300, glove.840B.300d, paragram_300_sl999 and wiki-news-300d-1M. Three main experiments are conducted in this thesis. In the first experiment, a top-performing TC model is created for a recent NLP competition held at Kaggle.com. Each implemented procedure is benchmarked on how the accuracy and execution time of the model are affected. In the second experiment, principal component analysis (PCA) and random projection (RP) are applied to the pre-trained word embeddings used in the top-performing model to investigate how the accuracy and execution time are affected when creating lower-dimensional embedding vectors.
In the third experiment, the same model is benchmarked on a separate dataset (Sentiment140) to investigate how well it generalizes on other data and how each implemented procedure affects the accuracy compared to on the original dataset. The first experiment results in a bidirectional LSTM model and a combination of the three embeddings: glove, paragram and wiki-news concatenated together. The model is able to give predictions with an F1 score of 71% which is good enough to reach 9th place out of 1,401 participating teams in the competition. In the second experiment, the execution time is improved by 13%, by using PCA, while lowering the dimensionality of the embeddings by 66% and only losing half a percent of F1 accuracy. RP gave a constant accuracy of 66-67% regardless of the projected dimensions compared to over 70% when using PCA. In the third experiment, the model gained around 12% accuracy from the initial to the final benchmarks, compared to 19% on the competition dataset. The best-achieved accuracy on the Sentiment140 dataset is 86% and thus higher than the 71% achieved on the Quora dataset.
5

Revanur, Vandan, and Ayodeji Ayibiowu. "Automatic Generation of Descriptive Features for Predicting Vehicle Faults." Thesis, Högskolan i Halmstad, CAISR Centrum för tillämpade intelligenta system (IS-lab), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42885.

Abstract:
Predictive Maintenance (PM) has been increasingly adopted in the Automotive industry in recent decades, along with conventional approaches such as Preventive Maintenance and Diagnostic/Corrective Maintenance, since it provides many advantages: it proactively estimates a failure before its actual occurrence and adapts to the present status of the vehicle, in turn allowing flexible maintenance schedules for efficient repair or replacement of faulty components. PM necessitates the storage and analysis of large amounts of sensor data. This requirement can be a challenge in deploying this method on board the vehicles due to the limited storage and computational power of the vehicle's hardware. Hence, this thesis seeks to obtain low-dimensional descriptive features from high-dimensional data using Representation Learning. This low-dimensional representation is used for predicting vehicle faults, specifically Turbocharger-related failures. Since the Logged Vehicle Data (LVD) was the basis of all the data utilized in this thesis, it allowed for the evaluation of large populations of trucks without requiring additional measuring devices and facilities. The gradual degradation methodology is considered for describing vehicle condition, which allows for modeling the malfunction/failure as a continuous process rather than a discrete flip from a healthy to an unhealthy state. This approach eliminates the challenge of data imbalance between healthy and unhealthy samples. Two important hypotheses are presented. Firstly, Parallel Stacked Classical Autoencoders would produce better representations compared to individual Autoencoders. Secondly, employing Learned Embeddings on Categorical Variables would improve the performance of the dimensionality reduction. Based on these hypotheses, a model architecture is proposed and developed on the LVD. The model is shown to achieve good performance, in close standards to the previous state-of-the-art research.
This thesis, finally, illustrates the potential to apply parallel stacked architectures with Learned Embeddings for the Categorical features, and a combination of feature selection and extraction for numerical features, to predict the Remaining Useful Life (RUL) of a vehicle, in the context of the Turbocharger. A performance improvement of 21.68% with respect to the Mean Absolute Error (MAE) loss with an 80.42% reduction in the size of data was observed.
6

Murugan, Srikala. "Determining Event Outcomes from Social Media." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703427/.

Abstract:
An event is something that happens at a time and location. Events include major life events such as graduating college or getting married, and also simple day-to-day activities such as commuting to work or eating lunch. Most work on event extraction detects events and the entities involved in events. For example, cooking events will usually involve a cook, some utensils and appliances, and a final product. In this work, we target the task of determining whether events result in their expected outcomes. Specifically, we target cooking and baking events, and characterize event outcomes into two categories. First, we distinguish whether something edible resulted from the event. Second, if something edible resulted, we distinguish between perfect, partial and alternative outcomes. The main contributions of this thesis are a corpus of 4,000 tweets annotated with event outcome information and experimental results showing that the task can be automated. The corpus includes tweets that have only text as well as tweets that have text and an image.
7

De, Vine Lance. "Analogical frames by constraint satisfaction." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/198036/1/Lance_De%20Vine_Thesis.pdf.

Abstract:
This research develops a new and efficient constraint satisfaction approach to the unsupervised discovery of linguistic analogies. It shows that systems of analogies can be discovered with high confidence in natural language text by a computer program without human input. The discovery of analogies is useful for many applications such as the construction of linguistic resources, natural language processing and the automation of inference and reasoning.
8

Horn, Franziska [Verfasser], Klaus-Robert [Akademischer Betreuer] [Gutachter] Müller, Alan [Gutachter] Akbik, and Ziawasch [Gutachter] Abedjan. "Similarity encoder: A neural network architecture for learning similarity preserving embeddings / Franziska Horn ; Gutachter: Klaus-Robert Müller, Alan Akbik, Ziawasch Abedjan ; Betreuer: Klaus-Robert Müller." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1210998386/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Šůstek, Martin. "Word2vec modely s přidanou kontextovou informací." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is concerned with the explanation of word2vec models. Even though word2vec was introduced only recently (2013), many researchers have already tried to extend, understand, or at least use the model, because it provides surprisingly rich semantic information. This information is encoded in an N-dimensional vector representation and can be recalled by performing algebraic operations on the vectors. In addition, I suggest model modifications to obtain different word representations, using public image datasets. This thesis also includes parts dedicated to a word2vec extension based on convolutional neural networks.
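The algebraic operations the abstract refers to are typically analogy completions of the form king − man + woman ≈ queen; a minimal sketch with invented toy vectors (real word2vec embeddings are usually 100-300 dimensional):

```python
import numpy as np

# Toy embedding table standing in for a trained word2vec model.
emb = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([3.0, 0.0, 0.9]),
    "queen": np.array([3.0, 1.0, 0.9]),
    "lunch": np.array([0.1, 0.3, 2.0]),
}

def analogy(a, b, c, emb):
    """Answer 'a is to b as c is to ?': return the word (excluding
    a, b, c) closest to b - a + c by cosine similarity."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

# man : woman :: king : ?
answer = analogy("man", "woman", "king", emb)  # → "queen"
```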

Books on the topic "Neural Network Embeddings":

1

Unger, Herwig, and Wolfgang A. Halang, eds. Autonomous Systems 2016. VDI Verlag, 2016. http://dx.doi.org/10.51202/9783186848109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To meet the expectations raised by the terms Industrie 4.0, Industrial Internet and Internet of Things, real innovations are necessary, which can be brought about by information processing systems working autonomously. Owing to their growing complexity and their embedding in complex environments, their design becomes increasingly critical. Thus, the topics addressed in this book span from verification and validation of safety-related control software and suitable hardware designed for verifiability to be deployed in embedded systems over approaches to suppress electromagnetic interferences to strategies for network routing based on centrality measures and continuous re-authentication in peer-to-peer networks. Methods of neural and evolutionary computing are employed to aid diagnosing retinopathy of prematurity, to invert matrices and to solve non-deterministic polynomial-time hard problems. In natural language processing, interface problems between humans and machines are solved with g...

Book chapters on the topic "Neural Network Embeddings":

1

Zhang, Yuan, Jian Cao, Jue Chen, Wenyu Sun, and Yuan Wang. "Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings." In Artificial Neural Networks and Machine Learning – ICANN 2023, 411–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44192-9_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Markov, Ilia, Helena Gómez-Adorno, Juan-Pablo Posadas-Durán, Grigori Sidorov, and Alexander Gelbukh. "Author Profiling with Doc2vec Neural Network-Based Document Embeddings." In Advances in Soft Computing, 117–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-62428-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bajaj, Ahsaas, Shubham Krishna, Hemant Tiwari, and Vanraj Vala. "Learning Mobile App Embeddings Using Multi-task Neural Network." In Natural Language Processing and Information Systems, 29–40. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23281-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Röchert, Daniel, German Neubaum, and Stefan Stieglitz. "Identifying Political Sentiments on YouTube: A Systematic Comparison Regarding the Accuracy of Recurrent Neural Network and Machine Learning Models." In Disinformation in Open Online Media, 107–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61841-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Since social media have increasingly become forums to exchange personal opinions, more and more approaches have been suggested to analyze those sentiments automatically. Neural networks and traditional machine learning methods allow individual adaption by training the data, tailoring the algorithm to the particular topic that is discussed. Still, a great number of methodological combinations involving algorithms (e.g., recurrent neural networks (RNN)), techniques (e.g., word2vec), and methods (e.g., Skip-Gram) are possible. This work offers a systematic comparison of sentiment analytical approaches using different word embeddings with RNN architectures and traditional machine learning techniques. Using German comments of controversial political discussions on YouTube, this study uses metrics such as F1-score, precision and recall to compare the quality of performance of different approaches. First results show that deep neural networks outperform multiclass prediction with small datasets in contrast to traditional machine learning models with word embeddings.
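The evaluation metrics named in the abstract (precision, recall, F1-score, here with a macro average for the multiclass setting) can be computed directly from label lists; the labels below are invented examples:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall and F1 from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    classes = sorted(set(y_true))
    return sum(precision_recall_f1(y_true, y_pred, c)[2] for c in classes) / len(classes)

# Invented 3-class sentiment labels (positive / negative / neutral).
y_true = ["pos", "neg", "neu", "pos", "neg"]
y_pred = ["pos", "neg", "pos", "pos", "neu"]
```

In practice one would use a library implementation (e.g. scikit-learn's `precision_recall_fscore_support`), but the definitions above are what such comparisons rest on.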
5

Picone, Rico A. R., Dane Webb, Finbarr Obierefu, and Jotham Lentz. "New Methods for Metastimuli: Architecture, Embeddings, and Neural Network Optimization." In Augmented Cognition, 288–304. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78114-9_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Calderaro, Salvatore, Giosué Lo Bosco, Filippo Vella, and Riccardo Rizzo. "Breast Cancer Histologic Grade Identification by Graph Neural Network Embeddings." In Bioinformatics and Biomedical Engineering, 283–96. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34960-7_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Biswas, Arijit, Mukul Bhutani, and Subhajit Sanyal. "MRNet-Product2Vec: A Multi-task Recurrent Neural Network for Product Embeddings." In Machine Learning and Knowledge Discovery in Databases, 153–65. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71273-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salsal, Sura Khalid, and Wafaa ALhamed. "Document Retrieval in Text Archives Using Neural Network-Based Embeddings Compared to TFIDF." In Intelligent Systems and Networks, 526–37. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2094-2_63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Molokwu, Bonaventure C., Shaon Bhatta Shuvo, Narayan C. Kar, and Ziad Kobti. "Node Classification in Complex Social Graphs via Knowledge-Graph Embeddings and Convolutional Neural Network." In Lecture Notes in Computer Science, 183–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50433-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Barbaglia, Luca, Sergio Consoli, and Sebastiano Manzan. "Exploring the Predictive Power of News and Neural Machine Learning Models for Economic Forecasting." In Mining Data for Financial Applications, 135–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66981-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Forecasting economic and financial variables is a challenging task for several reasons, such as the low signal-to-noise ratio, regime changes, and the effect of volatility, among others. A recent trend is to extract information from news as an additional source to forecast economic activity and financial variables. The goal is to evaluate if news can improve forecasts from standard methods that usually are not well-specified and have poor out-of-sample performance. In a currently ongoing project, our goal is to combine a richer information set that includes news with a state-of-the-art machine learning model. In particular, we leverage two recent advances in Data Science, specifically Word Embedding and Deep Learning models, which have recently attracted extensive attention in many scientific fields. We believe that by combining the two methodologies, effective solutions can be built to improve the prediction accuracy for economic and financial time series. In this preliminary contribution, we provide an overview of the methodology under development and some initial empirical findings. The forecasting model is based on DeepAR, an auto-regressive probabilistic Recurrent Neural Network model, that is combined with GloVe Word Embeddings extracted from economic news. The target variable is the spread between the US 10-Year Treasury Constant Maturity and the 3-Month Treasury Constant Maturity (T10Y3M). The DeepAR model is trained on a large number of related GloVe Word Embedding time series, and employed to produce point and density forecasts.

Conference papers on the topic "Neural Network Embeddings":

1

Luo, Dixin, Haoran Cheng, Qingbin Li, and Hongteng Xu. "Coupled Point Process-based Sequence Modeling for Privacy-preserving Network Alignment." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Network alignment aims at finding the correspondence of nodes across different networks, which is significant for many applications, e.g., fraud detection and crime network tracing across platforms. In practice, however, accessing the topological information of different networks is often restricted and even forbidden, considering privacy and security issues. Instead, what we observed might be the event sequences of the networks' nodes in the continuous-time domain. In this study, we develop a coupled neural point process-based (CPP) sequence modeling strategy, which provides a solution to privacy-preserving network alignment based on the event sequences. Our CPP consists of a coupled node embedding layer and a neural point process module. The coupled node embedding layer embeds one network's nodes and explicitly models the alignment matrix between the two networks. Accordingly, it parameterizes the node embeddings of the other network by the push-forward operation. Given the node embeddings, the neural point process module jointly captures the dynamics of the two networks' event sequences. We learn the CPP model in a maximum likelihood estimation framework with an inverse optimal transport (IOT) regularizer. Experiments show that our CPP is compatible with various point process backbones and is robust to the model misspecification issue, which achieves encouraging performance on network alignment. The code is available at https://github.com/Dixin-s-Lab/CNPP.
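The push-forward parameterisation described in the abstract can be sketched in a few lines (the alignment matrix and embeddings below are random placeholders, not the quantities CPP actually learns):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node embeddings of network A (4 nodes, 3 dimensions each).
emb_a = rng.normal(size=(4, 3))

# Soft alignment matrix P: row i is a distribution over network B's
# nodes, giving correspondence probabilities for A's node i
# (row-stochastic via a softmax over random logits).
logits = rng.normal(size=(4, 4))
P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Push-forward: network B's node embeddings are not free parameters
# but are transported from A's embeddings through the alignment.
emb_b = P.T @ emb_a

# A hard alignment can be read off for evaluation.
alignment = P.argmax(axis=1)
```

The point of the construction is that learning `P` and `emb_a` jointly against both networks' event sequences yields the alignment without ever exposing either network's topology.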
2

Dong, Yuxiao, Ziniu Hu, Kuansan Wang, Yizhou Sun, and Jie Tang. "Heterogeneous Network Representation Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Representation learning has offered a revolutionary learning paradigm for various AI domains. In this survey, we examine and review the problem of representation learning with the focus on heterogeneous networks, which consists of different types of vertices and relations. The goal of this problem is to automatically project objects, most commonly, vertices, in an input heterogeneous network into a latent embedding space such that both the structural and relational properties of the network can be encoded and preserved. The embeddings (representations) can be then used as the features to machine learning algorithms for addressing corresponding network tasks. To learn expressive embeddings, current research developments can fall into two major categories: shallow embedding learning and graph neural networks. After a thorough review of the existing literature, we identify several critical challenges that remain unaddressed and discuss future directions. Finally, we build the Heterogeneous Graph Benchmark to facilitate open research for this rapidly-developing topic.
3

Liu, Bing, Wei Luo, Gang Li, Jing Huang, and Bo Yang. "Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks?" In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As deep learning gains popularity in modelling dynamical systems, we expose an underappreciated misunderstanding relevant to modelling dynamics on networks. Strongly influenced by graph neural networks, latent vertex embeddings are naturally adopted in many neural dynamical network models. However, we show that embeddings tend to induce a model that fits observations well but simultaneously has incorrect dynamical behaviours. Recognising that previous studies narrowly focus on short-term predictions during the transient phase of a flow, we propose three tests for correct long-term behaviour, and illustrate how an embedding-based dynamical model fails these tests, and analyse the causes, particularly through the lens of topological conjugacy. In doing so, we show that the difficulties can be avoided by not using embedding. We propose a simple embedding-free alternative based on parametrising two additive vector-field components. Through extensive experiments, we verify that the proposed model can reliably recover a broad class of dynamics on different network topologies from time series data.
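The embedding-free alternative, a self-dynamics term plus an aggregated pairwise interaction term, can be sketched with a toy Euler integration (the concrete `f`, `g` and the diffusion example are illustrative choices, not the paper's learned components):

```python
import numpy as np

def simulate(x0, A, f, g, dt=0.01, steps=1000):
    """Euler-integrate dx_i/dt = f(x_i) + sum_j A[i, j] * g(x_i, x_j),
    the additive two-component form, over adjacency matrix A."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        self_term = f(x)
        interaction = np.array([
            sum(A[i, j] * g(x[i], x[j]) for j in range(len(x)))
            for i in range(len(x))
        ])
        x = x + dt * (self_term + interaction)
    return x

# Toy diffusion on a 3-node path graph: no self-dynamics,
# pairwise coupling pulls each state toward its neighbours.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x_final = simulate([1.0, 0.0, -1.0],
                   A,
                   f=lambda x: np.zeros_like(x),
                   g=lambda xi, xj: xj - xi)
# Diffusion conserves the total and drives all states toward the mean (0).
```

Because the two components act directly on observed states rather than latent embeddings, long-term behaviours such as this conservation law survive by construction, which is the paper's argument for dropping the encoder-decoder.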
4

Aspis, Yaniv, Krysia Broda, Jorge Lobo, and Alessandra Russo. "Embed2Sym - Scalable Neuro-Symbolic Reasoning via Clustered Embeddings." In 19th International Conference on Principles of Knowledge Representation and Reasoning {KR-2022}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neuro-symbolic reasoning approaches proposed in recent years combine a neural perception component with a symbolic reasoning component to solve a downstream task. By doing so, these approaches can provide neural networks with symbolic reasoning capabilities, improve their interpretability and enable generalization beyond the training task. However, this often comes at the cost of poor training time, with potential scalability issues. In this paper, we propose a scalable neuro-symbolic approach, called Embed2Sym. We complement a two-stage (perception and reasoning) neural network architecture designed to solve a downstream task end-to-end with a symbolic optimisation method for extracting learned latent concepts. Specifically, the trained perception network generates clusters in embedding space that are identified and labelled using symbolic knowledge and a symbolic solver. With the latent concepts identified, a neuro-symbolic model is constructed by combining the perception network with the symbolic knowledge of the downstream task, resulting in a model that is interpretable and transferable. Our evaluation shows that Embed2Sym outperforms state-of-the-art neuro-symbolic systems on benchmark tasks in terms of training time by several orders of magnitude while providing similar if not better accuracy.
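The clustering-and-labelling step can be sketched as follows (the 2-D points, anchor labels and minimal k-means are illustrative stand-ins for Embed2Sym's learned embedding space and symbolic solver):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal k-means with farthest-point initialisation:
    returns centroids and per-point cluster assignments."""
    centroids = [points[0]]
    while len(centroids) < k:
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centroids[c] = points[assign == c].mean(axis=0)
    return centroids, assign

# Synthetic "perception" embeddings: two well-separated blobs in 2-D.
rng = np.random.default_rng(1)
blob0 = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(20, 2))
blob1 = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(20, 2))
points = np.vstack([blob0, blob1])

centroids, assign = kmeans(points, k=2)

# Symbolic labelling: known anchor points name each discovered cluster.
anchors = {"zero": np.array([0.0, 0.0]), "one": np.array([5.0, 5.0])}
cluster_label = {
    c: min(anchors, key=lambda name: np.linalg.norm(anchors[name] - centroids[c]))
    for c in range(2)
}
```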
5

Garcia-Romero, Daniel, David Snyder, Gregory Sell, Daniel Povey, and Alan McCree. "Speaker diarization using deep neural network embeddings." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7953094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hamaguchi, Takuo, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. "Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset.
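The key idea, computing an unseen entity's embedding at test time by pooling messages from the auxiliary triples it appears in, can be sketched as follows (the additive message and all toy embeddings are deliberate simplifications of the paper's Graph-NN propagation):

```python
import numpy as np

# Embeddings of in-KB entities and relations (invented, 3-dim).
entity_emb = {
    "tokyo": np.array([1.0, 0.0, 0.0]),
    "japan": np.array([0.0, 1.0, 0.0]),
    "asia":  np.array([0.0, 0.0, 1.0]),
}
relation_emb = {
    "capital_of": np.array([0.1, 0.2, 0.0]),
    "located_in": np.array([0.0, 0.1, 0.3]),
}

def embed_ookb(aux_triples, entity_emb, relation_emb):
    """Embed an out-of-knowledge-base head entity by mean-pooling one
    message per auxiliary triple (head, rel, tail) it appears in; the
    'message' here is simply tail embedding + relation embedding."""
    messages = [entity_emb[t] + relation_emb[r] for (_, r, t) in aux_triples]
    return np.mean(messages, axis=0)

# A new entity "osaka" arrives at test time with two auxiliary triples.
aux = [("osaka", "located_in", "japan"), ("osaka", "located_in", "asia")]
osaka = embed_ookb(aux, entity_emb, relation_emb)
```

No retraining is needed: the new embedding is a pure function of existing embeddings and the auxiliary triples, which is what makes the approach attractive for the OOKB setting.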
7

Cheng, Weiyu, Yanyan Shen, Yanmin Zhu, and Linpeng Huang. "DELF: A Dual-Embedding based Deep Latent Factor Model for Recommendation." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Among various recommendation methods, latent factor models are usually considered to be state-of-the-art techniques, which aim to learn user and item embeddings for predicting user-item preferences. When applying latent factor models to recommendation with implicit feedback, the quality of embeddings always suffers from inadequate positive feedback and noisy negative feedback. Inspired by the idea of NSVD that represents users based on their interacted items, this paper proposes a dual-embedding based deep latent factor model named DELF for recommendation with implicit feedback. In addition to learning a single embedding for a user (resp. item), we represent each user (resp. item) with an additional embedding from the perspective of the interacted items (resp. users). We employ an attentive neural method to discriminate the importance of interacted users/items for dual-embedding learning. We further introduce a neural network architecture to incorporate dual embeddings for recommendation. A novel attempt of DELF is to model each user-item interaction with four deep representations that are subtly fused for preference prediction. We conducted extensive experiments on real-world datasets. The results verify the effectiveness of user/item dual embeddings and the superior performance of DELF on item recommendation.
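The dual-embedding idea, representing a user both by a free embedding and by pooling the embeddings of interacted items, can be sketched as follows (uniform mean pooling and additive score fusion stand in for DELF's attentive neural fusion; all values are toy examples):

```python
import numpy as np

# Free (single) embeddings for users and items (invented, 2-dim).
user_emb = {"u1": np.array([1.0, 0.0])}
item_emb = {"i1": np.array([0.5, 0.5]),
            "i2": np.array([0.0, 1.0]),
            "i3": np.array([1.0, 1.0])}

# Implicit feedback: items each user has interacted with.
interactions = {"u1": ["i1", "i2"]}

def dual_user_embedding(user):
    """Return the user's two views: the free embedding, and an
    interaction-based embedding pooled from interacted items."""
    pooled = np.mean([item_emb[i] for i in interactions[user]], axis=0)
    return user_emb[user], pooled

def score(user, item):
    """Fuse both user views with the item embedding via dot products."""
    free, pooled = dual_user_embedding(user)
    return float(free @ item_emb[item] + pooled @ item_emb[item])

# Preference score for the not-yet-interacted item i3.
s = score("u1", "i3")
```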
8

Romero, Hector E., Ning Ma, and Guy J. Brown. "Snorer Diarisation Based On Deep Neural Network Embeddings." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Snyder, David, Daniel Garcia-Romero, Daniel Povey, and Sanjeev Khudanpur. "Deep Neural Network Embeddings for Text-Independent Speaker Verification." In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Settle, Shane, and Karen Livescu. "Discriminative acoustic word embeddings: Recurrent neural network-based approaches." In 2016 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2016. http://dx.doi.org/10.1109/slt.2016.7846310.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography