Academic literature on the topic "Sentence Embedding Spaces"

Create an accurate citation in the APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Sentence Embedding Spaces".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sentence Embedding Spaces"

1

Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence". Applied Sciences 11, no. 7 (April 3, 2021): 3214. http://dx.doi.org/10.3390/app11073214.

Full text
Abstract
Visual-semantic embedding aims to learn a joint embedding space where related video and sentence instances are located close to each other. Most existing methods put instances in a single embedding space. However, they struggle to embed instances due to the difficulty of matching visual dynamics in videos to textual features in sentences. A single space is not enough to accommodate various videos and sentences. In this paper, we propose a novel framework that maps instances into multiple individual embedding spaces so that we can capture multiple relationships between instances, leading to compelling video retrieval. We propose to produce a final similarity between instances by fusing the similarities measured in each embedding space using a weighted-sum strategy. We determine the weights according to the sentence, so we can flexibly emphasize an embedding space. We conducted sentence-to-video retrieval experiments on a benchmark dataset. The proposed method achieved superior performance, with results competitive with state-of-the-art methods. These experimental results demonstrate the effectiveness of the proposed multiple embedding approach compared to existing methods.
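
The sentence-conditioned fusion this abstract describes reduces to a weighted sum of per-space similarities. Below is a minimal Python sketch of that step under stated assumptions: the function names are hypothetical, and the softmax weighting over sentence-predicted logits is an illustrative guess at the weighting scheme, not the paper's exact formulation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_similarity(video_embs, sent_embs, weight_logits):
    """Fuse per-space similarities with a weighted sum.

    video_embs / sent_embs: lists of K vectors, one per embedding space.
    weight_logits: K scores predicted from the sentence; a softmax turns
    them into fusion weights, so one space can be emphasized per query.
    """
    sims = np.array([cosine(v, s) for v, s in zip(video_embs, sent_embs)])
    weights = np.exp(weight_logits) / np.exp(weight_logits).sum()
    return float(weights @ sims)

# Toy example with K = 3 embedding spaces of dimension 4.
rng = np.random.default_rng(0)
video = [rng.normal(size=4) for _ in range(3)]
sent = [rng.normal(size=4) for _ in range(3)]
print(fused_similarity(video, sent, np.array([0.2, 1.5, -0.3])))
```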
2

Liu, Yi, Chengyu Yin, Jingwei Li, Fang Wang, and Senzhang Wang. "Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN". Algorithms 15, no. 3 (February 28, 2022): 80. http://dx.doi.org/10.3390/a15030080.

Full text
Abstract
Accurately predicting user–item interactions is critically important in many real applications, including recommender systems and user behavior analysis in social networks. One major drawback of existing studies is that they generally analyze the sparse user–item interaction data directly, without considering their semantic correlations and the structural information hidden in the data. Another limitation is that existing approaches usually embed the users and items into different embedding spaces in a static way, ignoring the dynamic characteristics of both users and items. In this paper, we propose to learn dynamic embedding vector trajectories rather than static embedding vectors for users and items simultaneously. A Metapath-guided Recursive RNN based Shift embedding method named MRRNN-S is proposed to learn the continuously evolving embeddings of users and items for more accurately predicting their future interactions. The proposed MRRNN-S extends our previous model RRNN-S, proposed in earlier work. Compared with RRNN-S, we add a word2vec module and a skip-gram-based meta-path module to better capture the rich auxiliary information in the user–item interaction data. Specifically, we first regard each user's interaction data with items as sentence data to model its semantic and sequential information, and construct the user–item interaction graph. Then we sample meta-path instances to capture the heterogeneity and structural information of the user–item interaction graph. A recursive RNN is proposed to iteratively and mutually learn the dynamic user and item embeddings in the same latent space based on their historical interactions. Next, a shift embedding module is proposed to predict the future user embeddings. To predict which item a user will interact with, we output the item embedding instead of the pairwise interaction probability between users and items, which is much more efficient. Through extensive experiments on three real-world datasets, we demonstrate that MRRNN-S achieves superior performance in an extensive comparison with state-of-the-art baseline models.
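
The first step the authors describe, treating each user's chronological interaction history as a "sentence" so that a skip-gram model can embed items, can be sketched with an off-the-shelf library. This assumes gensim is available; the toy interaction data and parameters are illustrative only, not the MRRNN-S pipeline.

```python
from gensim.models import Word2Vec

# Each user's chronological item interactions, treated as a sentence of tokens.
interaction_sentences = [
    ["item_1", "item_7", "item_3"],
    ["item_7", "item_3", "item_9"],
    ["item_2", "item_1", "item_7"],
]

# Skip-gram (sg=1) learns item embeddings from co-occurrence within sequences,
# analogous to the word2vec module the abstract mentions.
model = Word2Vec(interaction_sentences, vector_size=16, window=2,
                 min_count=1, sg=1, seed=42)
print(model.wv.most_similar("item_7", topn=2))
```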
3

Qian, Chen, Fuli Feng, Lijie Wen, and Tat-Seng Chua. "Conceptualized and Contextualized Gaussian Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13683–91. http://dx.doi.org/10.1609/aaai.v35i15.17613.

Full text
Abstract
Word embedding can represent a word as a point vector or as a Gaussian distribution in a high-dimensional space. A Gaussian distribution is innately more expressive than a point vector owing to its ability to additionally capture the semantic uncertainty of words, and it can thus express asymmetric relations among words more naturally (e.g., animal entails cat but not the reverse). However, previous Gaussian embedders neglect inner-word conceptual knowledge and lack a tailored Gaussian contextualizer, leading to inferior performance on both intrinsic (context-agnostic) and extrinsic (context-sensitive) tasks. In this paper, we first propose a novel Gaussian embedder which explicitly accounts for inner-word conceptual units (sememes) to represent word semantics more precisely; during learning, we propose Gaussian Distribution Attention over Gaussian representations to adaptively aggregate multiple sememe distributions into a word distribution, which guarantees the Gaussian linear combination property. Additionally, we propose a Gaussian contextualizer that utilizes outer-word contexts in a sentence, producing contextualized Gaussian representations for context-sensitive tasks. Extensive experiments on intrinsic and extrinsic tasks demonstrate the effectiveness of the proposed approach, which achieves state-of-the-art performance with nearly 5.00% relative improvement.
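
The asymmetry that motivates Gaussian embeddings can be made concrete with the KL divergence between diagonal Gaussians, which is direction-sensitive. The sketch below uses the standard closed-form KL; treating it as an entailment score is an illustration of the general idea, not the paper's training objective.

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    # KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ) for diagonal covariances.
    return 0.5 * np.sum(var0 / var1 + (mu1 - mu0) ** 2 / var1
                        - 1.0 + np.log(var1 / var0))

# A broad (high-variance) concept versus a narrow (low-variance) one.
mu_animal, var_animal = np.zeros(3), np.full(3, 4.0)
mu_cat, var_cat = np.array([0.5, -0.2, 0.1]), np.full(3, 0.5)

# The divergence is asymmetric: nesting the narrow distribution inside the
# broad one is cheap, the reverse is expensive, which is what lets a Gaussian
# embedding score the direction of an entailment relation.
print(kl_diag_gaussians(mu_cat, var_cat, mu_animal, var_animal))   # small
print(kl_diag_gaussians(mu_animal, var_animal, mu_cat, var_cat))   # large
```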
4

Cantini, Riccardo, Fabrizio Marozzo, Giovanni Bruno, and Paolo Trunfio. "Learning Sentence-to-Hashtags Semantic Mapping for Hashtag Recommendation on Microblogs". ACM Transactions on Knowledge Discovery from Data 16, no. 2 (April 30, 2022): 1–26. http://dx.doi.org/10.1145/3466876.

Full text
Abstract
The growing use of microblogging platforms is generating a huge amount of posts that need effective methods to be classified and searched. On Twitter and other social media platforms, hashtags are exploited by users to facilitate the search, categorization, and spread of posts. Choosing the appropriate hashtags for a post is not always easy for users, and therefore posts are often published without hashtags or with poorly chosen ones. To deal with this issue, we propose a new model, called HASHET (HAshtag recommendation using Sentence-to-Hashtag Embedding Translation), aimed at suggesting a relevant set of hashtags for a given post. HASHET is based on two independent latent spaces for embedding the text of a post and the hashtags it contains. A mapping process based on a multi-layer perceptron is then used to learn a translation from the semantic features of the text to the latent representation of its hashtags. We evaluated the effectiveness of two language representation models for sentence embedding and tested different search strategies for semantic expansion, finding that the combined use of BERT (Bidirectional Encoder Representation from Transformer) and a global expansion strategy leads to the best recommendation results. HASHET has been evaluated on two real-world case studies related to the 2016 United States presidential election and the COVID-19 pandemic. The results reveal the effectiveness of HASHET in predicting one or more correct hashtags, with an average F-score of up to 0.82 and a recommendation hit rate of up to 0.92. Our approach has been compared to the most relevant techniques used in the literature (generative models, unsupervised models, and attention-based supervised models), achieving up to a 15% improvement in F-score for the hashtag recommendation task and 9% for the topic discovery task.
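
The core translation step, an MLP mapping from the sentence space into the hashtag space followed by nearest-neighbor lookup, can be approximated in a few lines. This is a hedged sketch using synthetic embeddings and scikit-learn; HASHET's actual encoders, expansion strategies, and hyperparameters differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy stand-ins for pre-trained embeddings: 200 posts, a 32-d sentence space,
# a 16-d hashtag space (the two latent spaces are kept independent).
X_sent = rng.normal(size=(200, 32))
true_W = rng.normal(size=(32, 16))
Y_hash = X_sent @ true_W + 0.05 * rng.normal(size=(200, 16))

# The learned translation: sentence space -> hashtag space.
mapper = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
mapper.fit(X_sent, Y_hash)

# Hypothetical hashtag vocabulary embeddings; recommend by nearest neighbors
# of the translated sentence vector in the hashtag space.
hashtag_vecs = rng.normal(size=(50, 16))

def recommend(sent_vec, k=3):
    target = mapper.predict(sent_vec[None, :])[0]
    sims = hashtag_vecs @ target / (
        np.linalg.norm(hashtag_vecs, axis=1) * np.linalg.norm(target))
    return np.argsort(-sims)[:k]          # indices of the top-k hashtags

print(recommend(X_sent[0]))
```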
5

Zhang, Yachao, Runze Hu, Ronghui Li, Yanyun Qu, Yuan Xie, and Xiu Li. "Cross-Modal Match for Language Conditioned 3D Object Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 7359–67. http://dx.doi.org/10.1609/aaai.v38i7.28566.

Full text
Abstract
Language conditioned 3D object grounding aims to find the object within a 3D scene mentioned by a natural language description, which mainly depends on matching visual content with natural language. Considerable improvements in grounding performance have been achieved by improving the multimodal fusion mechanism or bridging the gap between detection and matching. However, several mismatches are ignored, i.e., the mismatch between local visual representations and the global sentence representation, and the mismatch between the visual space and the corresponding label-word space. In this paper, we propose cross-modal match for 3D grounding from the perspective of mitigating these mismatches. Specifically, to match local visual features with the global description sentence, we propose a BEV (bird's-eye-view) based global information embedding module. It projects multiple object proposal features into the BEV, and the relations of different objects are captured by a visual transformer which can model both positions and features with long-range dependencies. To circumvent the mismatch between the feature spaces of different modalities, we propose cross-modal consistency learning. It applies cross-modal consistency constraints to convert the visual feature space into the label-word feature space, resulting in easier matching. Besides, we introduce a label distillation loss and a global distillation loss to drive these matches to be learned in a distillation manner. We evaluate our method in mainstream evaluation settings on three datasets, and the results demonstrate the effectiveness of the proposed method.
6

Dancygier, Barbara. "Mental space embeddings, counterfactuality, and the use of unless". English Language and Linguistics 6, no. 2 (October 9, 2002): 347–77. http://dx.doi.org/10.1017/s1360674302000278.

Full text
Abstract
Unless-constructions have often been compared with conditionals. It was noted that unless can in most cases be paraphrased with if not, but that its meaning resembles that of except if (Geis, 1973; von Fintel, 1991). Initially, it was also assumed that, unlike if-conditionals, unless-sentences with counterfactual (or irrealis) meanings are not acceptable. In recent studies by Declerck and Reed (2000, 2001), however, the acceptability of such sentences was demonstrated and a new analysis was proposed. The present article argues for an account of irrealis unless-sentences in terms of epistemic distance and mental space embeddings. First, the use of verb forms in irrealis sentences is described as an instance of the use of distanced forms, which are widely used in English to mark hypotheticality. In the second part, the theory of mental spaces is introduced and applied to show how different mental space set-ups (in conjunction with distanced forms) account for the construction of different hypothetical meanings. The so-called irrealis unless-sentences are then interpreted as a number of instances of mental space embeddings. Finally, it is shown how the account proposed explains the fact that some unless-constructions can be paraphrased only with if not while others only with except if.
7

Amigo, Enrique, Alejandro Ariza-Casabona, Victor Fresno, and M. Antonia Marti. "Information Theory–based Compositional Distributional Semantics". Computational Linguistics 48, no. 4 (2022): 907–48. http://dx.doi.org/10.1162/_.

Full text
Abstract
In the context of text representation, Compositional Distributional Semantics models aim to fuse the Distributional Hypothesis and the Principle of Compositionality. Text embedding is based on co-occurrence distributions, and the representations are in turn combined by compositional functions that take the text structure into account. However, the theoretical basis of compositional functions is still an open issue. In this article we define and study the notion of Information Theory–based Compositional Distributional Semantics (ICDS): (i) we first establish formal properties for embedding, composition, and similarity functions based on Shannon's Information Theory; (ii) we analyze the existing approaches under this prism, checking whether or not they comply with the established desirable properties; (iii) we propose two parameterizable composition and similarity functions that generalize traditional approaches while fulfilling the formal properties; and finally (iv) we perform an empirical study on several textual similarity datasets that include sentences with high and low lexical overlap, and on the similarity between words and their descriptions. Our theoretical analysis and empirical results show that fulfilling the formal properties positively affects the accuracy of text representation models in terms of correspondence (isometry) between the embedding and meaning spaces.
8

Faraz, Anum, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos, and Andreas Kanavos. "Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework". Information 15, no. 4 (April 19, 2024): 233. http://dx.doi.org/10.3390/info15040233.

Full text
Abstract
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children's online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset, aimed at identifying and mitigating potential sexual predatory behaviors through chat conversation analysis. By utilizing fastText word embeddings to vectorize sentences, we have refined a support vector machine (SVM) classifier, achieving remarkable performance metrics, with recall, accuracy, and F-scores approaching 0.99. These metrics not only demonstrate the classifier's effectiveness but also mark a significant advance beyond existing methodologies in this field. The efficacy of our framework is additionally validated on a custom dataset, composed of 71 predatory chat logs from the Perverted Justice website, further establishing the reliability and robustness of our classifier. Protectbot represents a crucial innovation in enhancing child safety within online gaming communities, providing a proactive, AI-enhanced solution to detect and address predatory threats promptly. Our findings highlight the immense potential of AI-driven interventions to create safer digital spaces for young users.
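
The classification pipeline, fastText-style sentence vectors fed to an SVM, is easy to sketch. The snippet below substitutes a tiny random embedding table for real fastText vectors and toy labels for PAN12 data, so it shows only the shape of the approach, not the trained system.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical word-embedding table standing in for fastText vectors; in the
# real pipeline each sentence is vectorized with fastText before the SVM.
rng = np.random.default_rng(2)
vocab = {w: rng.normal(size=8) for w in
         ["hello", "game", "meet", "alone", "where", "live", "play", "fun"]}

def sentence_vector(tokens):
    # Mean of word vectors: one simple fastText-style sentence encoding.
    return np.mean([vocab[t] for t in tokens if t in vocab], axis=0)

X = np.stack([sentence_vector(s) for s in
              [["hello", "game"], ["play", "fun"], ["where", "live"],
               ["meet", "alone"]]])
y = np.array([0, 0, 1, 1])   # 0 = benign chat, 1 = flagged (toy labels)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(sentence_vector(["where", "meet"])[None, :]))
```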
9

Croce, Danilo, Giuseppe Castellucci, and Roberto Basili. "Adversarial training for few-shot text classification". Intelligenza Artificiale 14, no. 2 (January 11, 2021): 201–14. http://dx.doi.org/10.3233/ia-200051.

Full text
Abstract
In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to reach high performance by relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. Usually, in Machine Learning this problem is mitigated by the use of semi-supervised methods or, more recently, by Transfer Learning, in the context of deep architectures. One recent promising method to enable semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in the context of NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating in expressive low-dimensional embeddings; these are derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces and the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying such an adversarial schema to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, when considering a complex classification schema, e.g., involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
10

Hao, Sun, Xiaolin Qin, and Xiaojing Liu. "Learning hierarchical embedding space for image-text matching". Intelligent Data Analysis, September 14, 2023, 1–19. http://dx.doi.org/10.3233/ida-230214.

Full text
Abstract
There are currently two mainstream strategies for image-text matching. The first, termed joint embedding learning, aims to model the semantic information of both image and sentence in a shared feature subspace, which facilitates the measurement of semantic similarity but focuses only on the global alignment relationship. To explore the local semantic relationship more fully, the second, termed metric learning, aims to learn a complex similarity function that directly outputs a score for each image-text pair. However, it suffers from a significantly higher computational burden at the retrieval stage. In this paper, we propose a hierarchically joint embedding model that incorporates the local semantic relationship into a joint embedding learning framework. The proposed method learns the shared local and global embedding spaces simultaneously, and models the joint local embedding space with respect to specific local similarity labels, which are easy to obtain from the lexical information of the corpus. Unlike methods based on metric learning, we can precompute fixed representations of both images and sentences by concatenating the normalized local and global representations, which makes efficient retrieval feasible. Experiments show that the proposed model achieves competitive performance compared to existing joint embedding learning models on the two publicly available datasets Flickr30k and MS-COCO.
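
The retrieval trick mentioned at the end, concatenating the normalized local and global representations so that similarities can be precomputed, has a neat property worth noting: cosine similarity on the concatenation equals the equal-weight average of the per-space cosines. A small sketch with hypothetical names:

```python
import numpy as np

def joint_representation(local_vec, global_vec):
    # L2-normalize each part, then concatenate; cosine similarity on the
    # result equals the equal-weight average of the per-space cosines,
    # which is what makes precomputed, index-friendly retrieval possible.
    l = local_vec / np.linalg.norm(local_vec)
    g = global_vec / np.linalg.norm(global_vec)
    return np.concatenate([l, g])

rng = np.random.default_rng(3)
img = joint_representation(rng.normal(size=8), rng.normal(size=16))
txt = joint_representation(rng.normal(size=8), rng.normal(size=16))
print(float(img @ txt) / (np.linalg.norm(img) * np.linalg.norm(txt)))
```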

Theses on the topic "Sentence Embedding Spaces"

1

Duquenne, Paul-Ambroise. "Sentence Embeddings for Massively Multilingual Speech and Text Processing". Electronic thesis or dissertation, Sorbonne université, 2024. http://www.theses.fr/2024SORUS039.

Full text
Abstract
Representation learning of sentences has been widely studied in NLP. While many works have explored different pre-training objectives to create contextual representations from sentences, several others have focused on learning sentence embeddings for multiple languages with the aim of closely encoding paraphrases and translations in the sentence embedding space. In this thesis, we first study how to extend text sentence embedding spaces to the speech modality in order to build a multilingual speech/text sentence embedding space. Next, we explore how to use this multilingual and multimodal sentence embedding space for large-scale speech mining. This allows us to automatically create alignments between written and spoken sentences in different languages. For high similarity thresholds in the latent space, aligned sentences can be considered translations. If the alignments involve written sentences on one side and spoken sentences on the other, then these are potential speech-to-text translations. If the alignments involve spoken sentences on both sides, then these are potential speech-to-speech translations. To validate the quality of the mined data, we train speech-to-text translation models and speech-to-speech translation models. We show that adding the automatically mined data significantly improves the quality of the learned translation models, demonstrating the quality of the alignments and the usefulness of the mined data. Then, we study how to decode these sentence embeddings into text or speech in different languages. We explore several methods for training decoders and analyze their robustness to modalities/languages not seen during training, to evaluate cross-lingual and cross-modal transfer. We demonstrate that we can perform zero-shot cross-modal translation in this framework, achieving translation results close to systems learned in a supervised manner with a cross-attention mechanism. The compatibility between speech/text representations from different languages enables this very good performance, despite an intermediate fixed-size representation. Finally, we develop a new state-of-the-art massively multilingual speech/text sentence embedding space, named SONAR, based on conclusions drawn from the first two projects. We study different objective functions to learn such a space and analyze their impact on the organization of the space as well as on the ability to decode these representations. We show that this sentence embedding space outperforms previous state-of-the-art methods for both cross-lingual and cross-modal similarity search as well as in decoding capabilities. This new space covers 200 written languages and 37 spoken languages. It also offers text translation results close to the NLLB system on which it is based, and speech translation results competitive with the supervised Whisper system. We also present SONAR EXPRESSIVE, which introduces an additional representation encoding non-semantic speech properties, such as vocal style or expressivity of speech.
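
The mining stage described above can be sketched as nearest-neighbor search with a similarity threshold in the shared space. This is a simplified illustration with random vectors; production systems of this kind typically use margin-based scoring and approximate nearest-neighbor indexes rather than the raw threshold shown here.

```python
import numpy as np

def mine_pairs(src_embs, tgt_embs, threshold=0.8):
    """Align sentence embeddings across languages/modalities.

    Any pair scoring above the threshold in the shared space is kept as a
    candidate translation (speech-to-text or speech-to-speech, depending on
    which side each embedding came from).
    """
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    sims = src @ tgt.T                      # cosine similarity matrix
    pairs = [(i, int(sims[i].argmax()), float(sims[i].max()))
             for i in range(len(src))]
    return [p for p in pairs if p[2] >= threshold]

rng = np.random.default_rng(4)
speech = rng.normal(size=(5, 64))
text = speech + 0.1 * rng.normal(size=(5, 64))   # noisy "translations"
print(mine_pairs(speech, text))
```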

Book chapters on the topic "Sentence Embedding Spaces"

1

Alnajjar, Khalid. "When Word Embeddings Become Endangered". In Multilingual Facilitation, 275–88. University of Helsinki, 2021. http://dx.doi.org/10.31885/9789515150257.24.

Full text
Abstract
Big languages such as English and Finnish have many natural language processing (NLP) resources and models, but this is not the case for low-resourced and endangered languages, for which such resources are scarce despite the great advantages they would provide to the language communities. The most common types of resources available for low-resourced and endangered languages are translation dictionaries and universal dependencies. In this paper, we present a method for constructing word embeddings for endangered languages using existing word embeddings of different resource-rich languages and the translation dictionaries of resource-poor languages. Thereafter, the embeddings are fine-tuned using the sentences in the universal dependencies and aligned to match the semantic spaces of the big languages, resulting in cross-lingual embeddings. The endangered languages we work with here are Erzya, Moksha, Komi-Zyrian, and Skolt Sami. Furthermore, we build a universal sentiment analysis model for all the languages that are part of this study, whether endangered or not, by utilizing cross-lingual word embeddings. The evaluation conducted shows that our word embeddings for endangered languages are well aligned with the resource-rich languages and are suitable for training task-specific models, as demonstrated by our sentiment analysis models, which achieved high accuracies. All our cross-lingual word embeddings and sentiment analysis models will be released openly via an easy-to-use Python library.
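
One standard way to perform the dictionary-based alignment step described here is orthogonal Procrustes: solve for the orthogonal map that best carries dictionary-pair vectors from one space onto the other. The sketch below is a generic illustration of that technique, not necessarily the chapter's exact procedure.

```python
import numpy as np

def procrustes_align(src, tgt):
    """Orthogonal mapping of source-language vectors onto the target space.

    src, tgt: (n, d) arrays of embeddings for the n dictionary pairs.
    Returns W minimizing ||src @ W - tgt|| over orthogonal matrices.
    """
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(5)
# Toy resource-rich space and a rotated "endangered language" space linked
# by a small translation dictionary (20 word pairs, 10-d vectors).
rich = rng.normal(size=(20, 10))
rotation = np.linalg.qr(rng.normal(size=(10, 10)))[0]
endangered = rich @ rotation.T

W = procrustes_align(endangered, rich)
print(np.allclose(endangered @ W, rich, atol=1e-6))   # True: spaces aligned
```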
2

Xiao, Qingfa, Shuangyin Li, and Lei Chen. "Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230584.

Full text
Abstract
Contrastive learning has significantly advanced the unsupervised learning of sentence representations. This approach clusters the augmented positive instance with the anchor instance to create a desired embedding space. However, relying solely on the contrastive objective can result in sub-optimal outcomes due to its inability to differentiate subtle semantic variations between positive pairs. Specifically, common data augmentation techniques frequently introduce semantic distortion, leading to a semantic margin between the positive pair, while the InfoNCE loss function overlooks this margin and prioritizes similarity maximization between positive pairs during training, leaving the trained model insensitive to fine-grained semantics. In this paper, we introduce a novel Identical and Fraternal Twins of Contrastive Learning (IFTCL) framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques. We propose a Twins Loss to preserve the innate margin during training and promote the potential of data enhancement in order to overcome the sub-optimal issue. We also present proof-of-concept experiments combined with the contrastive objective to prove the validity of the proposed Twins Loss. Furthermore, we propose a hippocampus queue mechanism to restore and reuse negative instances without additional computation, which further enhances the efficiency and performance of the framework. We verify the IFTCL framework on nine semantic textual similarity tasks with both English and Chinese datasets, and the experimental results show that IFTCL outperforms state-of-the-art methods.
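
For reference, the InfoNCE objective this abstract critiques has a compact standard form: each anchor is pulled toward its own positive and pushed away from the other in-batch examples, with no allowance for a semantic margin. A minimal NumPy sketch of that baseline loss follows; the proposed Twins Loss is not reproduced here.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.05):
    """Standard InfoNCE over a batch: each anchor's positive is the
    same-index row of `positives`; all other rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(6)
z = rng.normal(size=(8, 32))
z_aug = z + 0.1 * rng.normal(size=(8, 32))    # augmented positives
print(info_nce(z, z_aug))
```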

Conference proceedings on the topic "Sentence Embedding Spaces"

1

Zhang, Chengkun, and Junbin Gao. "Hype-HAN: Hyperbolic Hierarchical Attention Network for Semantic Embedding". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/552.

Full text
Abstract
Hyperbolic space is a well-defined space with constant negative curvature. Recent research demonstrates its promise for capturing complex hierarchical structures, owing to its exceptionally high capacity and continuous tree-like properties. This paper bridges hyperbolic space's strengths to the power-law structure of documents by introducing a hyperbolic neural network architecture named Hyperbolic Hierarchical Attention Network (Hype-HAN). Hype-HAN defines three levels of embeddings (word/sentence/document) and two layers of hyperbolic attention mechanism (word-to-sentence/sentence-to-document) on Riemannian geometries of the Lorentz, Klein, and Poincaré models. Situated on the evolving embedding spaces, we utilize both conventional GRUs (Gated Recurrent Units) and hyperbolic GRUs with Möbius operations. Hype-HAN is applied to large-scale datasets. The empirical experiments show the effectiveness of our method.
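
The constant negative curvature mentioned above shows up directly in the Poincaré-ball distance, one of the three models named in the abstract: distances blow up near the boundary, giving the exponentially growing room that suits tree-like structure. A self-contained sketch of that distance function, purely illustrative:

```python
import numpy as np

def poincare_distance(u, v):
    # Geodesic distance in the Poincare ball; points must satisfy ||x|| < 1.
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

origin = np.zeros(2)
print(poincare_distance(origin, np.array([0.0, 0.5])))    # moderate
print(poincare_distance(origin, np.array([0.0, 0.95])))   # much larger
# Volume grows exponentially toward the boundary, which is what lets the
# space embed tree-like (power-law) document structure compactly.
```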
2

Wei, Liangchen, and Zhi-Hong Deng. "A Variational Autoencoding Approach for Inducing Cross-lingual Word Embeddings". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/582.

Full text
Abstract
Cross-language learning allows one to use training data from one language to build models for another language. Many traditional approaches require word-level aligned sentences from parallel corpora; in this paper, we define a general bilingual training objective function requiring only a sentence-level parallel corpus. We propose a variational autoencoding approach for training bilingual word embeddings. The variational model introduces a continuous latent variable to explicitly model the underlying semantics of the parallel sentence pairs and to guide the generation of the sentence pairs. Our model restricts the bilingual word embeddings to represent words in exactly the same continuous vector space. Empirical results on the task of cross-lingual document classification show that our method is effective.
3

Xu, Linli, Wenjun Ouyang, Xiaoying Ren, Yang Wang, and Liang Jiang. "Enhancing Semantic Representations of Bilingual Word Embeddings with Syntactic Dependencies". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/628.

Full text
Abstract
Cross-lingual representation is a technique that can both represent different languages in the same latent vector space and enable knowledge transfer across languages. To learn such representations, most existing works require parallel sentences with word-level alignments and assume that aligned words have similar Bag-of-Words (BoW) contexts. However, due to differences in grammar structures among languages, the contexts of aligned words in different languages may appear at different positions in the sentence. To address this issue of differing syntax across languages, we propose a model of bilingual word embeddings integrating syntactic dependencies (DepBiWE) that produces dependency parse trees which encode the accurate relative positions of the contexts of aligned words. In addition, a new method is proposed to learn bilingual word embeddings from dependency-based contexts and BoW contexts jointly. Extensive experimental results on a real-world dataset clearly validate the superiority of the proposed DepBiWE model on various natural language processing (NLP) tasks.
4

Baumel, Tal, Raphael Cohen, and Michael Elhadad. "Sentence Embedding Evaluation Using Pyramid Annotation". In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/w16-2526.

Full text
5

Yi, Xiaoyuan, Zhenghao Liu, Wenhao Li, and Maosong Sun. "Text Style Transfer via Learning Style Instance Supported Latent Space". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/526.

Full text
Abstract
Text style transfer pursues altering the style of a sentence while keeping its main content unchanged. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and has achieved noticeable progress. Nonetheless, the intractability of completely disentangling content from style in text leads to a tension between content preservation and style transfer accuracy. To address this problem, we propose a style instance supported method, StyIns. Instead of representing styles with embeddings or latent variables learned from single sentences, our model leverages the generative flow technique to extract underlying stylistic properties from multiple instances of each style, which form a more discriminative and expressive latent style space. By combining such a space with an attention-based structure, our model can better maintain the content and simultaneously achieve high transfer accuracy. Furthermore, the proposed method can be flexibly extended to semi-supervised learning so as to utilize available limited paired data. Experiments on three transfer tasks, sentiment modification, formality rephrasing, and poeticness generation, show that StyIns obtains a better balance between content and style, outperforming several recent baselines.
6

An, Yuan, Alexander Kalinowski, and Jane Greenberg. "Clustering and Network Analysis for the Embedding Spaces of Sentences and Sub-Sentences". In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA). IEEE, 2021. http://dx.doi.org/10.1109/idsta53674.2021.9660801.

Full text
7

Sato, Motoki, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. "Interpretable Adversarial Perturbation in Input Embedding Space for Text". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/601.

Full text
Abstract
Following great success in the image processing field, the idea of adversarial training has been applied to tasks in the natural language processing (NLP) field. One promising approach directly applies adversarial training developed in the image processing field to the input word embedding space instead of the discrete input space of texts. However, this approach gives up the interpretability of generating actual adversarial texts in exchange for significantly improving the performance of NLP tasks. This paper restores interpretability to such methods by restricting the directions of perturbations toward existing words in the input embedding space. As a result, we can straightforwardly reconstruct each perturbed input as an actual text, by treating the perturbations as replacements of words in the sentence, while maintaining or even improving task performance.
8

Hwang, Eugene. "Saving Endangered Languages with a Novel Three-Way Cycle Cross-Lingual Zero-Shot Sentence Alignment". In 10th International Conference on Artificial Intelligence & Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.131926.

Full text
Abstract
Sentence classification, including sentiment analysis, hate speech detection, tagging, and urgency detection, is one of the most promising and important subjects in the natural language processing field. With the advent of artificial neural networks, researchers usually take advantage of models suited to processing natural language, including RNNs, LSTMs, and BERT. However, these models require a huge amount of language corpus data to attain satisfactory accuracy. Typically this is not a big deal for researchers who are using major languages such as English and Chinese, because there are a myriad of other researchers and data on the Internet. However, other languages like Korean suffer from a scarcity of corpus data, and there are even more unnoticed languages in the world. One could try transfer learning for those languages, but using a model trained on an English corpus without any modification can be sub-optimal for other languages. This paper presents a way to align cross-lingual sentence embeddings in a shared embedding space using an additional projection layer and bilingual parallel data, which means this layer can be reused for other sentence classification tasks without further fine-tuning. To validate the power of the method, a further experiment was done on one of the endangered languages, the Jeju language. To the best of my knowledge, it is the first attempt so far to apply zero-shot inference to not just a minor but an endangered language.
9

Li, Wenye, Jiawei Zhang, Jianjun Zhou, and Laizhong Cui. "Learning Word Vectors with Linear Constraints: A Matrix Factorization Approach". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/582.

Full text
Abstract
Learning vector space representations of words, or word embeddings, has attracted much recent research attention. With the objective of better capturing the semantic and syntactic information inherent in words, we propose two new embedding models based on the singular value decomposition of lexical co-occurrences of words. Unlike previous work, our proposed models allow for injecting linear constraints when performing the decomposition, with which the desired semantic and syntactic information is maintained in the word vectors. Conceptually, the models are flexible and convenient for encoding prior knowledge about words. Computationally, they can be easily solved by direct matrix factorization. Surprisingly simple yet effective, the proposed models show significantly improved performance in empirical word analogy and sentence classification evaluations, and demonstrate high potential in practical applications.
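
An unconstrained version of this pipeline, building a PPMI-weighted co-occurrence matrix and factorizing it with a truncated SVD, is sketched below. The paper's contribution, injecting linear constraints into the decomposition, is omitted here, so this shows only the baseline the proposed models extend.

```python
import numpy as np

# Toy symmetric co-occurrence counts for a 5-word vocabulary.
cooc = np.array([[0, 4, 1, 0, 0],
                 [4, 0, 3, 1, 0],
                 [1, 3, 0, 2, 2],
                 [0, 1, 2, 0, 5],
                 [0, 0, 2, 5, 0]], dtype=float)

# Positive pointwise mutual information (PPMI) weighting.
total = cooc.sum()
row = cooc.sum(axis=1, keepdims=True)
col = cooc.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(cooc * total / (row * col))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Truncated SVD: scaled left singular vectors serve as the word vectors.
u, s, _ = np.linalg.svd(ppmi)
word_vectors = u[:, :2] * np.sqrt(s[:2])
print(word_vectors)
```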
10

Dimovski, Mladen, Claudiu Musat, Vladimir Ilievski, Andreea Hossman, and Michael Baeriswyl. "Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training Based on Sentence Embeddings". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/559.

Full text
Abstract
Spoken language understanding (SLU) systems, such as goal-oriented chatbots or personal assistants, rely on an initial natural language understanding (NLU) module to determine the intent and to extract the relevant information from the user queries they take as input. SLU systems usually help users solve problems in relatively narrow domains and require a large amount of in-domain training data. This leads to significant data availability issues that inhibit the development of successful systems. To alleviate this problem, we propose a data selection technique for the low-data regime that enables training with fewer labeled sentences, and thus at a lower labeling cost. We propose a submodularity-inspired data ranking function, the ratio-penalty marginal gain, for selecting data points to label based only on information extracted from the textual embedding space. We show that the distances in the embedding space are a viable source of information for data selection. Our method outperforms two known active learning techniques and enables cost-efficient training of the NLU unit. Moreover, our proposed selection technique does not need the model to be retrained between selection steps, making it time-efficient as well.
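
The exact form of the ratio-penalty marginal gain is not given in this abstract, so the greedy selector below uses a guessed stand-in score, marginal similarity coverage divided by a redundancy penalty against already-selected points, purely to illustrate the shape of a submodularity-inspired selection loop over sentence embeddings.

```python
import numpy as np

def select(embs, k):
    """Greedily pick k points with a submodularity-inspired score.

    The score (pool coverage divided by redundancy with the selected set)
    is an illustrative stand-in, not the paper's ratio-penalty formula.
    """
    x = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = x @ x.T
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(x)):
            if i in selected:
                continue
            gain = sims[i].sum()                        # coverage of the pool
            penalty = 1.0 + sum(max(sims[i][j], 0.0)    # redundancy with
                                for j in selected)      # already-chosen points
            score = gain / penalty
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

rng = np.random.default_rng(7)
pool = rng.normal(size=(30, 16))       # unlabeled sentence embeddings
print(select(pool, 5))                 # indices worth sending to annotators
```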
