Academic literature on the topic 'Sentence Embedding Spaces'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sentence Embedding Spaces.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Sentence Embedding Spaces"
Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence." Applied Sciences 11, no. 7 (April 3, 2021): 3214. http://dx.doi.org/10.3390/app11073214.
Liu, Yi, Chengyu Yin, Jingwei Li, Fang Wang, and Senzhang Wang. "Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN." Algorithms 15, no. 3 (February 28, 2022): 80. http://dx.doi.org/10.3390/a15030080.
Qian, Chen, Fuli Feng, Lijie Wen, and Tat-Seng Chua. "Conceptualized and Contextualized Gaussian Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13683–91. http://dx.doi.org/10.1609/aaai.v35i15.17613.
Cantini, Riccardo, Fabrizio Marozzo, Giovanni Bruno, and Paolo Trunfio. "Learning Sentence-to-Hashtags Semantic Mapping for Hashtag Recommendation on Microblogs." ACM Transactions on Knowledge Discovery from Data 16, no. 2 (April 30, 2022): 1–26. http://dx.doi.org/10.1145/3466876.
Zhang, Yachao, Runze Hu, Ronghui Li, Yanyun Qu, Yuan Xie, and Xiu Li. "Cross-Modal Match for Language Conditioned 3D Object Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 7359–67. http://dx.doi.org/10.1609/aaai.v38i7.28566.
Dancygier, Barbara. "Mental space embeddings, counterfactuality, and the use of unless." English Language and Linguistics 6, no. 2 (October 9, 2002): 347–77. http://dx.doi.org/10.1017/s1360674302000278.
Amigo, Enrique, Alejandro Ariza-Casabona, Victor Fresno, and M. Antonia Marti. "Information Theory–based Compositional Distributional Semantics." Computational Linguistics 48, no. 4 (2022): 907–48. http://dx.doi.org/10.1162/_.
Faraz, Anum, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos, and Andreas Kanavos. "Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework." Information 15, no. 4 (April 19, 2024): 233. http://dx.doi.org/10.3390/info15040233.
Croce, Danilo, Giuseppe Castellucci, and Roberto Basili. "Adversarial training for few-shot text classification." Intelligenza Artificiale 14, no. 2 (January 11, 2021): 201–14. http://dx.doi.org/10.3233/ia-200051.
Hao, Sun, Xiaolin Qin, and Xiaojing Liu. "Learning hierarchical embedding space for image-text matching." Intelligent Data Analysis, September 14, 2023, 1–19. http://dx.doi.org/10.3233/ida-230214.
Full textDissertations / Theses on the topic "Sentence Embedding Spaces"
Duquenne, Paul-Ambroise. "Sentence Embeddings for Massively Multilingual Speech and Text Processing." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS039.
Representation learning of sentences has been widely studied in NLP. While many works have explored different pre-training objectives to create contextual representations from sentences, several others have focused on learning sentence embeddings for multiple languages with the aim of closely encoding paraphrases and translations in the sentence embedding space.

In this thesis, we first study how to extend text sentence embedding spaces to the speech modality in order to build a multilingual speech/text sentence embedding space. Next, we explore how to use this multilingual and multimodal sentence embedding space for large-scale speech mining. This allows us to automatically create alignments between written and spoken sentences in different languages. For high similarity thresholds in the latent space, aligned sentences can be considered as translations. If the alignments involve written sentences on one side and spoken sentences on the other, these are potential speech-to-text translations. If the alignments involve spoken sentences on both sides, these are potential speech-to-speech translations. To validate the quality of the mined data, we train speech-to-text and speech-to-speech translation models. We show that adding the automatically mined data significantly improves the quality of the learned translation models, demonstrating the quality of the alignments and the usefulness of the mined data.

Then, we study how to decode these sentence embeddings into text or speech in different languages. We explore several methods for training decoders and analyze their robustness to modalities/languages not seen during training, in order to evaluate cross-lingual and cross-modal transfer. We demonstrate that we can perform zero-shot cross-modal translation in this framework, achieving translation results close to systems trained in a supervised manner with a cross-attention mechanism. The compatibility between speech/text representations from different languages enables this strong performance, despite an intermediate fixed-size representation.

Finally, we develop a new state-of-the-art massively multilingual speech/text sentence embedding space, named SONAR, based on conclusions drawn from the first two projects. We study different objective functions to learn such a space and analyze their impact on the organization of the space as well as on the ability to decode these representations. We show that this sentence embedding space outperforms previous state-of-the-art methods in both cross-lingual and cross-modal similarity search as well as in decoding. The new space covers 200 written languages and 37 spoken languages. It also offers text translation results close to the NLLB system on which it is based, and speech translation results competitive with the supervised Whisper system. We also present SONAR EXPRESSIVE, which introduces an additional representation encoding non-semantic speech properties, such as vocal style or expressivity of speech.
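The threshold-based mining described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the thesis code and not the SONAR API): it assumes speech and text encoders that map into a shared embedding space, stands in their outputs with random vectors, and keeps cross-modal pairs whose cosine similarity exceeds a high threshold as candidate speech-to-text translation pairs.

```python
# Minimal sketch of threshold-based cross-modal mining in a shared
# sentence embedding space. The embeddings below are random stand-ins
# for the outputs of real speech/text encoders, so few or no pairs
# will actually pass the threshold here.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize rows so that dot products equal cosine similarities."""
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
text_embeddings = normalize(rng.normal(size=(1000, 1024)))   # written sentences
speech_embeddings = normalize(rng.normal(size=(800, 1024)))  # spoken sentences

# Cosine similarity between every spoken sentence and every written sentence.
similarity = speech_embeddings @ text_embeddings.T

# Keep only pairs above a high similarity threshold: these are the
# candidate speech-to-text translation pairs used as mined training data.
threshold = 0.9
speech_idx, text_idx = np.where(similarity >= threshold)
mined_pairs = list(zip(speech_idx.tolist(), text_idx.tolist()))
print(f"mined {len(mined_pairs)} candidate speech-text pairs")
```

The same pattern applies to speech-to-speech mining by comparing two sets of speech embeddings instead; in practice, large-scale mining would also use approximate nearest-neighbor search rather than a full similarity matrix.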
Book chapters on the topic "Sentence Embedding Spaces"
Alnajjar, Khalid. "When Word Embeddings Become Endangered." In Multilingual Facilitation, 275–88. University of Helsinki, 2021. http://dx.doi.org/10.31885/9789515150257.24.
Xiao, Qingfa, Shuangyin Li, and Lei Chen. "Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230584.
Full textConference papers on the topic "Sentence Embedding Spaces"
Zhang, Chengkun, and Junbin Gao. "Hype-HAN: Hyperbolic Hierarchical Attention Network for Semantic Embedding." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/552.
Wei, Liangchen, and Zhi-Hong Deng. "A Variational Autoencoding Approach for Inducing Cross-lingual Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/582.
Xu, Linli, Wenjun Ouyang, Xiaoying Ren, Yang Wang, and Liang Jiang. "Enhancing Semantic Representations of Bilingual Word Embeddings with Syntactic Dependencies." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/628.
Baumel, Tal, Raphael Cohen, and Michael Elhadad. "Sentence Embedding Evaluation Using Pyramid Annotation." In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/w16-2526.
Yi, Xiaoyuan, Zhenghao Liu, Wenhao Li, and Maosong Sun. "Text Style Transfer via Learning Style Instance Supported Latent Space." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/526.
An, Yuan, Alexander Kalinowski, and Jane Greenberg. "Clustering and Network Analysis for the Embedding Spaces of Sentences and Sub-Sentences." In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA). IEEE, 2021. http://dx.doi.org/10.1109/idsta53674.2021.9660801.
Sato, Motoki, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. "Interpretable Adversarial Perturbation in Input Embedding Space for Text." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/601.
Hwang, Eugene. "Saving Endangered Languages with a Novel Three-Way Cycle Cross-Lingual Zero-Shot Sentence Alignment." In 10th International Conference on Artificial Intelligence & Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.131926.
Li, Wenye, Jiawei Zhang, Jianjun Zhou, and Laizhong Cui. "Learning Word Vectors with Linear Constraints: A Matrix Factorization Approach." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/582.
Dimovski, Mladen, Claudiu Musat, Vladimir Ilievski, Andreea Hossman, and Michael Baeriswyl. "Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training Based on Sentence Embeddings." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/559.