Journal articles on the topic "Embedding space"

To see the other types of publications on this topic, follow the link: Embedding space.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the top 50 journal articles for your research on the topic "Embedding space."

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in pdf format and read its abstract online, whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Takehara, Daisuke, and Kei Kobayashi. "Representing Hierarchical Structured Data Using Cone Embedding." Mathematics 11, no. 10 (May 15, 2023): 2294. http://dx.doi.org/10.3390/math11102294.

Abstract:
Extracting hierarchical structure from graph data is becoming an important problem in fields such as natural language processing and developmental biology. Hierarchical structure can be extracted by embedding methods in non-Euclidean spaces, such as Poincaré embedding and Lorentz embedding, and it is now possible to learn efficient embeddings by taking advantage of the structure of these spaces. In this study, we propose embedding into another type of metric space, called a metric cone, by learning only a single one-dimensional coordinate variable added to the original vector space or a pre-trained embedding space. This allows hierarchical information to be extracted while the properties of the pre-trained embedding are maintained. The metric cone is a one-dimensional extension of the original metric space and has the advantage that the curvature of the space can easily be adjusted by a parameter even when the coordinates of the original space are fixed. Through an extensive empirical evaluation, we corroborated the effectiveness of the proposed cone embedding model. For randomly generated trees, cone embedding demonstrated superior performance in extracting hierarchical structure compared to existing techniques, particularly in high-dimensional settings. For WordNet embeddings, cone embedding exhibited a noteworthy correlation between the extracted hierarchical structures and human evaluation outcomes.
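The metric cone the abstract builds on has a standard closed form: a cone point pairs a base-space point with a radial coordinate, and the base distance enters through an angle capped at π. Below is a minimal sketch of that textbook construction; the function and parameter names, including the curvature-controlling `kappa`, are mine, not the paper's.

```python
import numpy as np

def cone_distance(x, s, y, t, base_dist, kappa=1.0):
    """Distance in the metric cone over a base metric space.

    A cone point is a pair (base point, radial coordinate >= 0);
    `base_dist` is the base metric, and `kappa` rescales base
    distances, which effectively adjusts the cone's curvature.
    """
    theta = min(base_dist(x, y) / kappa, np.pi)   # angle, capped at pi
    # law of cosines for the cone metric
    return np.sqrt(s**2 + t**2 - 2.0 * s * t * np.cos(theta))

# toy usage: the cone over the real line with the usual metric
print(cone_distance(0.0, 1.0, 2.0, 1.5, base_dist=lambda a, b: abs(a - b)))
```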
2

Samko, Natasha. "Embeddings of weighted generalized Morrey spaces into Lebesgue spaces on fractal sets." Fractional Calculus and Applied Analysis 22, no. 5 (October 25, 2019): 1203–24. http://dx.doi.org/10.1515/fca-2019-0064.

Abstract:
We study embeddings of weighted local, and consequently global, generalized Morrey spaces defined on a quasi-metric measure set (X, d, μ) of general nature, which may be unbounded, into Lebesgue spaces L^s(X), 1 ≤ s ≤ p < ∞. The main motivation for obtaining such an embedding is to have an embedding of a non-separable Morrey space into a separable space. In the general setting of quasi-metric measure spaces and arbitrary weights, we give a sufficient condition for such an embedding. In the case of radial weights related to the center of the local Morrey space, we obtain an effective sufficient condition in terms of (in general fractional) upper Ahlfors dimensions of the set X. In the case of radial weights we also obtain necessary conditions for such embeddings of local and global Morrey spaces, with the use of (in general fractional) lower and upper Ahlfors dimensions. In the case of power-logarithmic-type weights we obtain a criterion for such embeddings when these dimensions coincide.
3

Paston, Sergey, and Taisiia Zaitseva. "Nontrivial Isometric Embeddings for Flat Spaces." Universe 7, no. 12 (December 4, 2021): 477. http://dx.doi.org/10.3390/universe7120477.

Abstract:
Nontrivial isometric embeddings for flat metrics (i.e., those which are not just planes in the ambient space) can serve as useful tools in the description of gravity in the embedding gravity approach. Such embeddings can additionally be required to have the same symmetry as the metric. On the other hand, it is possible to require the embedding to be unfolded so that the surface in the ambient space would occupy the subspace of the maximum possible dimension. In the weak gravitational field limit, such a requirement together with a large enough dimension of the ambient space makes embedding gravity equivalent to general relativity, while at lower dimensions it guarantees the linearizability of the equations of motion. We discuss symmetric embeddings for the metrics of flat Euclidean three-dimensional space and Minkowski space. We propose the method of sequential surface deformations for the construction of unfolded embeddings. We use it to construct such embeddings of flat Euclidean three-dimensional space and Minkowski space, which can be used to analyze the equations of motion of embedding gravity.
4

Ravindran, Renjith P., and Kavi Narayana Murthy. "Syntactic Coherence in Word Embedding Spaces." International Journal of Semantic Computing 15, no. 02 (June 2021): 263–90. http://dx.doi.org/10.1142/s1793351x21500057.

Abstract:
Word embeddings have recently become a vital part of many Natural Language Processing (NLP) systems. Word embeddings are a suite of techniques that represent words in a language as vectors in an n-dimensional real space, and they have been shown to encode a significant amount of syntactic and semantic information. When used in NLP systems, these representations have resulted in improved performance across a wide range of NLP tasks. However, it is not clear how syntactic properties interact with the more widely studied semantic properties of words, nor which factors in the modeling formulation encourage embedding spaces to pick up more of the syntactic than the semantic behavior of words. We investigate several aspects of word embedding spaces and modeling assumptions that maximize syntactic coherence, i.e., the degree to which words with similar syntactic properties form distinct neighborhoods in the embedding space. We do so in order to understand which of the existing models maximize syntactic coherence, making them a more reliable source for extracting syntactic category (POS) information. Our analysis shows that the syntactic coherence of S-CODE is superior to that of other, more popular and more recent embedding techniques such as Word2vec, fastText, GloVe, and LexVec when measured under compatible parameter settings. Our investigation also gives deeper insights into the geometry of the embedding space with respect to syntactic coherence, and how this is influenced by context size, word frequency, and the dimensionality of the embedding space.
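The paper defines syntactic coherence as the degree to which words with similar syntactic properties form distinct neighborhoods, which suggests a simple neighborhood-purity proxy. The sketch below is one plausible operationalisation under that reading, not the paper's actual measure.

```python
import numpy as np

def knn_pos_purity(emb, pos_tags, k=10):
    """Average fraction of each word's k nearest neighbours (by cosine
    similarity) that share its POS tag; higher means more coherent."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)           # exclude the word itself
    nn = np.argsort(-sims, axis=1)[:, :k]     # k nearest neighbours
    tags = np.asarray(pos_tags)
    return float(np.mean(tags[nn] == tags[:, None]))

# toy usage: five 3-d word vectors with two POS classes
emb = np.random.default_rng(0).normal(size=(5, 3))
print(knn_pos_purity(emb, ["NOUN", "NOUN", "VERB", "VERB", "NOUN"], k=2))
```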
5

Li, Pandeng, Yan Li, Hongtao Xie, and Lei Zhang. "Neighborhood-Adaptive Structure Augmented Metric Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1367–75. http://dx.doi.org/10.1609/aaai.v36i2.20025.

Abstract:
Most metric learning techniques typically focus on sample embedding learning, while implicitly assuming a homogeneous local neighborhood around each sample, based on the metrics used in training (e.g., a hypersphere for Euclidean distance or a unit hyperspherical crown for cosine distance). As real-world data often lie on a low-dimensional manifold curved in a high-dimensional space, it is unlikely that every part of the manifold shares the same local structure in the input space. Besides, considering the non-linearity of neural networks, the local structure in the output embedding space may not be as homogeneous as assumed. Therefore, representing each sample simply by its embedding while ignoring its individual neighborhood structure has limitations in Embedding-Based Retrieval (EBR). By exploiting the heterogeneity of local structures in the embedding space, we propose a Neighborhood-Adaptive Structure Augmented metric learning framework (NASA), where the neighborhood structure is realized as a structure embedding and learned along with the sample embedding in a self-supervised manner. In this way, without any modifications, most indexing techniques can be used to support large-scale EBR with NASA embeddings. Experiments on six standard benchmarks with two kinds of embeddings, i.e., binary embeddings and real-valued embeddings, show that our method significantly improves on and outperforms the state-of-the-art methods.
6

Bhowmik, Kowshik, and Anca Ralescu. "Clustering of Monolingual Embedding Spaces." Digital 3, no. 1 (February 23, 2023): 48–66. http://dx.doi.org/10.3390/digital3010004.

Abstract:
Suboptimal performance of cross-lingual word embeddings for distant and low-resource languages calls into question the isomorphic assumption integral to the mapping-based methods of obtaining such embeddings. This paper investigates the comparative impact of typological relationship and corpus size on the isomorphism between monolingual embedding spaces. To that end, two clustering algorithms were applied to three sets of pairwise degrees of isomorphisms. It is also the goal of the paper to determine the combination of the isomorphism measure and clustering algorithm that best captures the typological relationship among the chosen set of languages. Of the three measures investigated, Relational Similarity seemed to capture best the typological information of the languages encoded in their respective embedding spaces. These language clusters can help us identify, without any pre-existing knowledge about the real-world linguistic relationships shared among a group of languages, the related higher-resource languages of low-resource languages. The presence of such languages in the cross-lingual embedding space can help improve the performance of low-resource languages in a cross-lingual embedding space.
7

Hawley, Scott H., Zach Evans, and Joe Baldridge. "Audio (vector) algebra: Vector space operations on neural audio embeddings." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A178. http://dx.doi.org/10.1121/10.0015957.

Abstract:
Ever since the work of Castellon, Donahue, and Liang (ISMIR 2021) showed that the latent-space "embedding" representations encoded by OpenAI's Jukebox model contain semantically meaningful information about the music, many have wondered whether such embeddings support vector relations akin to the famous "king − man + woman = queen" result seen in word vector embeddings. Such an "audio (vector) algebra" would provide a way to perform operations on the audio by displacing the embeddings in certain directions and then decoding them to new sounds. The nonlinear aspects of the encoding process suggest that this may not be possible in general; however, for certain kinds of operations in finite regions of embedding spaces, such embedding vector transformations may indeed have musically relevant counterparts. In this talk we investigate the feasibility of such schemes for the cases of mixing and audio effects.
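The "king − man + woman = queen" relation the abstract alludes to is plain vector arithmetic followed by a nearest-neighbour lookup. A minimal sketch with toy vectors (purely illustrative; these are not Jukebox embeddings):

```python
import numpy as np

def analogy(emb, a, b, c):
    """Solve 'a is to b as c is to ?': find the item whose vector is
    nearest (by cosine) to emb[b] - emb[a] + emb[c]."""
    query = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for key, v in emb.items():
        if key in (a, b, c):
            continue
        sim = v @ query / (np.linalg.norm(v) * np.linalg.norm(query) + 1e-12)
        if sim > best_sim:
            best, best_sim = key, sim
    return best

emb = {"king": np.array([0.9, 0.8]), "man": np.array([0.5, 0.2]),
       "woman": np.array([0.45, 0.35]), "queen": np.array([0.85, 0.95])}
print(analogy(emb, "man", "king", "woman"))  # king - man + woman -> "queen"
```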
8

Hashimoto, Tatsunori B., David Alvarez-Melis, and Tommi S. Jaakkola. "Word Embeddings as Metric Recovery in Semantic Spaces." Transactions of the Association for Computational Linguistics 4 (December 2016): 273–86. http://dx.doi.org/10.1162/tacl_a_00098.

Abstract:
Continuous word representations have been remarkably useful across NLP tasks but remain poorly understood. We ground word embeddings in semantic spaces studied in the cognitive-psychometric literature, taking these spaces as the primary objects to recover. To this end, we relate log co-occurrences of words in large corpora to semantic similarity assessments and show that co-occurrences are indeed consistent with a Euclidean semantic space hypothesis. Framing word embedding as metric recovery of a semantic space unifies existing word embedding algorithms, ties them to manifold learning, and demonstrates that existing algorithms are consistent metric recovery methods given co-occurrence counts from random walks. Furthermore, we propose a simple, principled, direct metric recovery algorithm that performs on par with state-of-the-art word embedding and manifold learning methods. Finally, we complement the recent focus on analogies by constructing two new inductive reasoning datasets (series completion and classification) and demonstrate that word embeddings can be used to solve them as well.
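As a rough illustration of the metric-recovery framing (a deliberate simplification of mine, not the paper's estimator): treat negative log co-occurrence as a noisy squared-distance estimate and recover coordinates with classical multidimensional scaling.

```python
import numpy as np

def recover_embedding(cooc, dim=2):
    """Toy metric recovery: distances from -log normalized co-occurrence,
    coordinates from classical MDS (double-centring plus eigendecomposition)."""
    p = cooc / cooc.sum()
    d2 = -np.log(p + 1e-12)
    d2 = (d2 + d2.T) / 2.0                  # symmetrise
    np.fill_diagonal(d2, 0.0)               # self-distance is zero
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j                   # double-centred Gram matrix
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:dim]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

print(recover_embedding(np.array([[10., 5., 1.], [5., 8., 2.], [1., 2., 9.]])).shape)
```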
9

Marinari, Maria Grazia, and Mario Raimondo. "On Complete Intersections Over an Algebraically Non-Closed Field." Canadian Mathematical Bulletin 29, no. 2 (June 1, 1986): 140–45. http://dx.doi.org/10.4153/cmb-1986-024-0.

Abstract:
We give a criterion for an affine variety defined over an arbitrary field to have a complete intersection (c.i.) embedding into some affine space. Moreover, we give an example of a smooth real curve C all of whose embeddings into affine spaces are c.i.; nevertheless, it has an embedding into ℝ^3 which cannot be realized as a c.i. by polynomials.
10

Minemyer, Barry. "Isometric embeddings of polyhedra into Euclidean space." Journal of Topology and Analysis 07, no. 04 (September 22, 2015): 677–92. http://dx.doi.org/10.1142/s179352531550020x.

Abstract:
In this paper we consider piecewise linear (pl) isometric embeddings of Euclidean polyhedra into Euclidean space. A Euclidean polyhedron is just a metric space X which admits a triangulation T such that each n-dimensional simplex of T is affinely isometric to a simplex in 𝔼^n. We prove that any 1-Lipschitz map from an n-dimensional Euclidean polyhedron X into 𝔼^{3n} is ϵ-close to a pl isometric embedding for any ϵ > 0. If we remove the condition that the map be pl, then any 1-Lipschitz map into 𝔼^{2n+1} can be approximated by a (continuous) isometric embedding. These results are extended to isometric embedding theorems of spherical and hyperbolic polyhedra into Euclidean space by the use of the Nash–Kuiper C^1 isometric embedding theorem ([9] and [13]).
11

Schick, Timo, and Hinrich Schütze. "Learning Semantic Representations for Novel Words: Leveraging Both Form and Context." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6965–73. http://dx.doi.org/10.1609/aaai.v33i01.33016965.

Abstract:
Word embeddings are a key component of high-performing natural language processing (NLP) systems, but it remains a challenge to learn good representations for novel words on the fly, i.e., for words that did not occur in the training data. The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space. Currently, two approaches for learning embeddings of novel words exist: (i) learning an embedding from the novel word’s surface-form (e.g., subword n-grams) and (ii) learning an embedding from the context in which it occurs. In this paper, we propose an architecture that leverages both sources of information – surface-form and context – and show that it results in large increases in embedding quality. Our architecture obtains state-of-the-art results on the Definitional Nonce and Contextual Rare Words datasets. As input, we only require an embedding set and an unlabeled corpus for training our architecture to produce embeddings appropriate for the induced embedding space. Thus, our model can easily be integrated into any existing NLP system and enhance its capability to handle novel words.
12

Tan, Zhen, Xiang Zhao, Yang Fang, Bin Ge, and Weidong Xiao. "Knowledge Graph Representation via Similarity-Based Embedding." Scientific Programming 2018 (July 15, 2018): 1–12. http://dx.doi.org/10.1155/2018/6325635.

Abstract:
A knowledge graph, a typical multi-relational structure, includes large-scale facts about the world, yet it is still far from complete. Knowledge graph embedding, as a representation method, constructs a low-dimensional and continuous space to describe the latent semantic information and predict missing facts. Among the various solutions, almost all embedding models have high time and memory-space complexities and, hence, are difficult to apply to large-scale knowledge graphs. Other embedding models, such as TransE and DistMult, although of lower complexity, ignore inherent features and only use correlations between different entities to represent the features of each entity. To overcome these shortcomings, we present a novel low-complexity embedding model, SimE-ER, which calculates the similarity of entities in independent and associated spaces. In SimE-ER, each entity (relation) is described in two parts: in the independent space, the entity (relation) features are represented by the features the entity (relation) intrinsically owns, while in the associated space they are expressed by the features of the entities (relations) it connects to; the similarity between the embeddings of the same entity in the two representation spaces is kept high. In experiments, we evaluate our model on two typical tasks: entity prediction and relation prediction. Compared with the state-of-the-art models, our experimental results demonstrate that SimE-ER outperforms existing competitors and has low time and memory-space complexities.
13

Kasuya, Naohiko, and Masamichi Takase. "Generic immersions and totally real embeddings." International Journal of Mathematics 29, no. 11 (October 2018): 1850073. http://dx.doi.org/10.1142/s0129167x18500738.

Abstract:
We show that, for a closed orientable n-manifold, with n not congruent to 3 modulo 4, the existence of a CR-regular embedding into complex (n − 1)-space ensures the existence of a totally real embedding into complex n-space. This implies that a closed orientable n-manifold with non-vanishing Kervaire semi-characteristic possesses no CR-regular embedding into complex (n − 1)-space. We also pay special attention to the cases of CR-regular embeddings of spheres and of simply-connected 5-manifolds.
14

He, Tao, Lianli Gao, Jingkuan Song, Xin Wang, Kejie Huang, and Yuanfang Li. "SNEQ: Semi-Supervised Attributed Network Embedding with Attention-Based Quantisation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4091–98. http://dx.doi.org/10.1609/aaai.v34i04.5832.

Abstract:
Learning accurate low-dimensional embeddings for a network is a crucial task, as it facilitates many network analytics tasks. Moreover, the trained embeddings often require a significant amount of space to store, making storage and processing a challenge, especially as large-scale networks become more prevalent. In this paper, we present a novel semi-supervised network embedding and compression method, SNEQ, that is competitive with state-of-the-art embedding methods while being far more space- and time-efficient. SNEQ incorporates a novel quantisation method based on a self-attention layer that is trained in an end-to-end fashion and is able to dramatically compress the size of the trained embeddings, thus reducing the storage footprint and accelerating retrieval. Our evaluation on four real-world networks of diverse characteristics shows that SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, node classification and node recommendation. Moreover, the quantised embedding shows a great advantage in terms of storage and time compared with continuous embeddings as well as hashing methods.
15

Aull, C. E. "Some embeddings related to C*-embeddings." Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 44, no. 1 (February 1988): 88–104. http://dx.doi.org/10.1017/s1446788700031396.

Abstract:
A space S is R*-embedded (G*-embedded) in a space X if two disjoint regular closed sets (closure-disjoint open sets) of S are contained in disjoint regular closed sets (extended to closure-disjoint open sets) of X. A space S is R-extendable to a space X if any regular closed set of S can be extended to a regular closed set of X. It is shown that R*-embedding and G*-embedding are identical with C*-embedding for certain fairly general classes of Tychonoff spaces. Under certain conditions it is shown that R-extendability is related to z-embedding. Spaces in which the regular open sets are C- and C*-embedded are also investigated.
16

Zhang, Yuanpeng, Jingye Guan, Haobo Wang, Kaiming Li, Ying Luo, and Qun Zhang. "Generalized Zero-Shot Space Target Recognition Based on Global-Local Visual Feature Embedding Network." Remote Sensing 15, no. 21 (October 28, 2023): 5156. http://dx.doi.org/10.3390/rs15215156.

Abstract:
Existing deep learning-based space target recognition methods rely on abundantly labeled samples and are not capable of recognizing samples from unseen classes without training. In this article, based on generalized zero-shot learning (GZSL), we propose a space target recognition framework to simultaneously recognize space targets from both seen and unseen classes. First, we defined semantic attributes to describe the characteristics of different categories of space targets. Second, we constructed a dual-branch neural network, termed the global-local visual feature embedding network (GLVFENet), which jointly learns global and local visual features to obtain discriminative feature representations, thereby achieving GZSL for space targets with higher accuracy. Specifically, the global visual feature embedding subnetwork (GVFE-Subnet) calculates the compatibility score by measuring the cosine similarity between the projection of global visual features in the semantic space and various semantic vectors, thereby obtaining global visual embeddings. The local visual feature embedding subnetwork (LVFE-Subnet) introduces soft space attention, and an encoder discovers the semantic-guided local regions in the image to then generate local visual embeddings. Finally, the visual embeddings from both branches were combined and matched with semantics. The calibrated stacking method is introduced to achieve GZSL recognition of space targets. Extensive experiments were conducted on an electromagnetic simulation dataset of nine categories of space targets, and the effectiveness of our GLVFENet is confirmed.
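The compatibility score described for GVFE-Subnet reduces to a cosine similarity between a projected visual feature and each class semantic vector, with the arg-max giving the predicted class. A minimal sketch of that scoring step (array shapes and names are assumptions, not taken from the paper):

```python
import numpy as np

def compatibility_scores(visual_proj, semantic_vectors):
    """Cosine similarity between one projected visual feature and each
    class semantic vector."""
    v = visual_proj / (np.linalg.norm(visual_proj) + 1e-12)
    s = semantic_vectors / np.linalg.norm(semantic_vectors, axis=1, keepdims=True)
    return s @ v

rng = np.random.default_rng(0)
scores = compatibility_scores(rng.normal(size=64), rng.normal(size=(9, 64)))
print(int(np.argmax(scores)))  # index of the best-matching class
```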
17

Li, Wen, Cheng Zou, Meng Wang, Furong Xu, Jianan Zhao, Ruobing Zheng, Yuan Cheng, and Wei Chu. "DC-Former: Diverse and Compact Transformer for Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1415–23. http://dx.doi.org/10.1609/aaai.v37i2.25226.

Abstract:
In the person re-identification (ReID) task, it is still challenging to learn discriminative representations by deep learning, due to limited data. Generally speaking, the model will perform better when the amount of data is increased. The addition of similar classes strengthens the ability of the classifier to identify similar identities, thereby improving the discrimination of the representation. In this paper, we propose a Diverse and Compact Transformer (DC-Former) that can achieve a similar effect by splitting the embedding space into multiple diverse and compact subspaces. A compact embedding subspace helps the model learn more robust and discriminative embeddings to identify similar classes, and the fusion of these diverse embeddings containing more fine-grained information can further improve the effect of ReID. Specifically, multiple class tokens are used in a vision transformer to represent multiple embedding spaces. Then, a self-diverse constraint (SDC) is applied to these spaces to push them away from each other, which makes each embedding space diverse and compact. Further, a dynamic weight controller (DWC) is designed to balance the relative importance among them during training. The experimental results of our method are promising, surpassing previous state-of-the-art methods on several commonly used person ReID benchmarks. Our code is available at https://github.com/ant-research/Diverse-and-Compact-Transformer.
18

Xie, Cunxiang, Limin Zhang, and Zhaogen Zhong. "Entity Alignment Method Based on Joint Learning of Entity and Attribute Representations." Applied Sciences 13, no. 9 (May 6, 2023): 5748. http://dx.doi.org/10.3390/app13095748.

Abstract:
Entity alignment helps discover and link entities from different knowledge graphs (KGs) that refer to the same real-world entity, making it a critical technique for KG fusion. Most entity alignment methods are based on knowledge representation learning, which uses a mapping function to project entities from different KGs into a unified vector space and align them based on calculated similarities. However, this process requires sufficient pre-aligned entity pairs. To address this problem, this study proposes an entity alignment method based on joint learning of entity and attribute representations. Structural embeddings are learned using the triples modeling method based on TransE and PTransE and extracted from the embedding vector space utilizing semantic information from direct and multi-step relation paths. Simultaneously, attribute character embeddings are learned using the N-gram-based compositional function to encode a character sequence for the attribute values, followed by TransE to model attribute triples in the embedding vector space to obtain attribute character embedding vectors. By learning the structural and attribute character embeddings simultaneously, the structural embeddings of entities from different KGs can be transferred into a unified vector space. Lastly, the similarities in the structural embedding of different entities were calculated to perform entity alignment. The experimental results showed that the proposed method performed well on the DBP15K and DWK100K datasets, and it outperformed currently available entity alignment methods by 16.8, 27.5, and 24.0% in precision, recall, and F1 measure, respectively.
19

Bhowmik, Kowshik, and Anca Ralescu. "Leveraging Vector Space Similarity for Learning Cross-Lingual Word Embeddings: A Systematic Review." Digital 1, no. 3 (July 1, 2021): 145–61. http://dx.doi.org/10.3390/digital1030011.

Abstract:
This article presents a systematic literature review on quantifying the proximity between independently trained monolingual word embedding spaces. A search was carried out in the broader context of inducing bilingual lexicons from cross-lingual word embeddings, especially for low-resource languages. The returned articles were then classified. Cross-lingual word embeddings have drawn the attention of researchers in the field of natural language processing (NLP). Although existing methods have yielded satisfactory results for resource-rich languages and languages related to them, some researchers have pointed out that the same is not true for low-resource and distant languages. In this paper, we report the research on methods proposed to provide better representation for low-resource and distant languages in the cross-lingual word embedding space.
20

Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.

Abstract:
Learning a joint embedding space for various modalities is of vital importance for multimodal fusion. Mainstream modality fusion approaches fail to achieve this goal, leaving a modality gap which heavily affects cross-modal fusion. In this paper, we propose a novel adversarial encoder-decoder-classifier framework to learn a modality-invariant embedding space. Since the distributions of the various modalities vary in nature, to reduce the modality gap we translate the distributions of the source modalities into that of the target modality via their respective encoders, using adversarial training. Furthermore, we impose additional constraints on the embedding space by introducing a reconstruction loss and a classification loss. Then we fuse the encoded representations using a hierarchical graph neural network which explicitly explores unimodal, bimodal, and trimodal interactions in multiple stages. Our method achieves state-of-the-art performance on multiple datasets. Visualization of the learned embeddings suggests that the joint embedding space learned by our method is discriminative.
21

Zhang, Yiding, Xiao Wang, Nian Liu, and Chuan Shi. "Embedding Heterogeneous Information Network in Hyperbolic Spaces." ACM Transactions on Knowledge Discovery from Data 16, no. 2 (April 30, 2022): 1–23. http://dx.doi.org/10.1145/3468674.

Abstract:
Heterogeneous information network (HIN) embedding, which aims to project a HIN into a low-dimensional space, has attracted considerable research attention. Most of the existing HIN embedding methods focus on preserving the inherent network structure and semantic correlations in Euclidean spaces. However, one fundamental question is whether Euclidean spaces are the intrinsic spaces of HINs. Recent research finds that complex networks with hyperbolic geometry can naturally reflect some properties, e.g., hierarchical and power-law structure. In this article, we make an effort toward embedding HINs in hyperbolic spaces. We analyze the structures of three HINs and discover that some properties, e.g., the power-law distribution, also exist in HINs. Therefore, we propose a novel HIN embedding model, HHNE. Specifically, to capture the structural and semantic relations between nodes, HHNE employs meta-path guided random walks to sample sequences for each node. Then HHNE exploits the hyperbolic distance as the proximity measurement. We also derive an effective optimization strategy to update the hyperbolic embeddings iteratively. Since HHNE optimizes different relations in a single space, we further propose the extended model HHNE++. HHNE++ models different relations in different spaces, which enables it to learn complex interactions in HINs. An optimization strategy for HHNE++ is also derived to update its parameters in a principled manner. The experimental results demonstrate the effectiveness of our proposed models.
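The hyperbolic distance HHNE uses as its proximity measure is usually computed in the Poincaré ball model; the abstract does not name the model, so take the following as the generic textbook formula rather than the paper's exact implementation.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    duv = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v)) + eps
    return np.arccosh(1.0 + 2.0 * duv / denom)

print(poincare_distance(np.array([0.1, 0.0]), np.array([0.0, 0.7])))
```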
22

Johannsen, David A., and Jeffrey L. Solka. "Embedding in space forms." Journal of Multivariate Analysis 114 (February 2013): 171–88. http://dx.doi.org/10.1016/j.jmva.2012.06.002.
23

Tsitsulin, Anton, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Ivan Oseledets, and Emmanuel Müller. "FREDE." Proceedings of the VLDB Endowment 14, no. 6 (February 2021): 1102–10. http://dx.doi.org/10.14778/3447689.3447713.

Abstract:
Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks. As such embeddings rely, explicitly or implicitly, on a similarity measure among nodes, they require the computation of a quadratic similarity matrix, inducing a tradeoff between space complexity and embedding quality. To date, no graph embedding work combines (i) linear space complexity, (ii) a nonlinear transform as its basis, and (iii) nontrivial quality guarantees. In this paper we introduce FREDE (FREquent Directions Embedding), a graph embedding based on matrix sketching that combines those three desiderata. Starting out from the observation that embedding methods aim to preserve the covariance among the rows of a similarity matrix, FREDE iteratively improves on quality while individually processing rows of a nonlinearly transformed PPR similarity matrix derived from a state-of-the-art graph embedding method, and it provides, at any iteration, column-covariance approximation guarantees almost indistinguishable from those of the optimal approximation by SVD. Our experimental evaluation on variably sized networks shows that FREDE performs almost as well as SVD and competitively against state-of-the-art embedding methods in diverse data science tasks, even when it is based on as little as 10% of node similarities.
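The "FREquent Directions" in FREDE's name refers to Liberty's matrix-sketching algorithm, which is compact enough to show directly. This is a generic sketch of that algorithm alone, not FREDE's full pipeline (which adds the nonlinear PPR transform and anytime per-row processing); it assumes the sketch size does not exceed the row dimension.

```python
import numpy as np

def frequent_directions(rows, sketch_size):
    """Stream the rows of a matrix A and maintain an l x d sketch B
    such that B^T B approximates A^T A, using O(l*d) memory."""
    b, filled = None, 0
    for row in rows:
        row = np.asarray(row, dtype=float)
        if b is None:
            b = np.zeros((sketch_size, row.shape[0]))
        if filled == sketch_size:                 # sketch full: shrink it
            _, s, vt = np.linalg.svd(b, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[-1]**2, 0.0))
            b = s[:, None] * vt                   # last row becomes zero
            filled = int(np.count_nonzero(s))
        b[filled] = row
        filled += 1
    return b

a = np.random.default_rng(0).normal(size=(100, 8))
print(frequent_directions(a, sketch_size=4).shape)  # (4, 8)
```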
24

He, Hongliang, Junlei Zhang, Zhenzhong Lan, and Yue Zhang. "Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12863–71. http://dx.doi.org/10.1609/aaai.v37i11.26512.

Abstract:
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performance in learning unsupervised sentence embeddings. However, in previous studies, each embedding used for contrastive learning is derived from only one sentence instance; we call these embeddings instance-level embeddings. In other words, each embedding is regarded as a unique class of its own, which may hurt generalization performance. In this study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to smooth the boundaries of embeddings in the feature space. Specifically, we retrieve embeddings from a dynamic memory buffer according to semantic similarity to get a positive embedding group. Then the embeddings in the group are aggregated by a self-attention operation to produce a smoothed instance embedding for further analysis. We evaluate our method on standard semantic text similarity (STS) tasks and achieve an average of 78.30%, 79.47%, 77.73%, and 79.42% Spearman's correlation on the base of BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large respectively, a 2.05%, 1.06%, 1.16%, and 0.52% improvement compared to unsup-SimCSE.
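The retrieval-and-aggregation step described above can be sketched with softmax attention standing in for the paper's self-attention layer; the buffer, `k`, and the temperature `tau` are illustrative assumptions rather than the published configuration.

```python
import numpy as np

def smooth_embedding(query, buffer, k=8, tau=0.05):
    """Retrieve the k most similar embeddings from a memory buffer and
    aggregate them with softmax attention into a smoothed embedding."""
    q = query / np.linalg.norm(query)
    m = buffer / np.linalg.norm(buffer, axis=1, keepdims=True)
    sims = m @ q
    top = np.argsort(-sims)[:k]          # the positive embedding group
    w = np.exp(sims[top] / tau)
    w /= w.sum()                         # attention weights
    return w @ buffer[top]

rng = np.random.default_rng(0)
print(smooth_embedding(rng.normal(size=16), rng.normal(size=(100, 16))).shape)
```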
25

Iskakova, G. Sh., M. S. Aitenova, and A. K. Sexenbayeva. "Embeddings of a Multi-Weighted Anisotropic Sobolev Type Space." Bulletin of the Karaganda University-Mathematics 113, no. 1 (March 29, 2024): 73–83. http://dx.doi.org/10.31489/2024m1/73-83.

Abstract:
Parameters such as various integral and differential characteristics of functions, smoothness properties of regions and their boundaries, and many classes of weight functions give rise to complex relationships and embedding conditions for multi-weighted anisotropic Sobolev-type spaces. The desire not to restrict these parameters leads to the development of new approaches, based either on introducing alternative definitions of the spaces and their norms or on special localization methods. This article examines embeddings of multi-weighted anisotropic Sobolev-type spaces with anisotropy in all the defining characteristics of the norm of the space, including differential indices, summability indices, and weight coefficients. The localization method applied here made it possible to obtain an embedding for the case of an arbitrary domain and weights of general type, which is important for applications in the theory of differential operators and in numerical analysis.
26

JP, Sanjanasri, Vijay Krishna Menon, Soman KP, Rajendran S, and Agnieszka Wolk. "Generation of Cross-Lingual Word Vectors for Low-Resourced Languages Using Deep Learning and Topological Metrics in a Data-Efficient Way." Electronics 10, no. 12 (June 8, 2021): 1372. http://dx.doi.org/10.3390/electronics10121372.

Abstract:
Linguists have long focused on qualitative comparisons of semantics across languages. Evaluating semantic interpretation between disparate language pairs like English and Tamil is an even more formidable task than for Slavic languages. The concept of word embedding in Natural Language Processing (NLP) has provided a felicitous opportunity to quantify linguistic semantics. Multi-lingual tasks can be performed by projecting the word embeddings of one language onto the semantic space of the other. This research presents a suite of data-efficient deep learning approaches to deduce the transfer function from the embedding space of English to that of Tamil, deploying three popular embedding algorithms: Word2Vec, GloVe, and FastText. A novel evaluation paradigm was devised for the generation of embeddings to assess their effectiveness, using the original embeddings as ground truths. The transferability of the proposed model to other target languages was assessed via pre-trained Word2Vec embeddings for Hindi and Chinese. We empirically show that, with a bilingual dictionary of a thousand words and a corresponding small monolingual target (Tamil) corpus, useful embeddings can be generated by transfer learning from a well-trained source (English) embedding. Furthermore, we demonstrate the usability of the generated target embeddings in a few NLP use-case tasks, such as text summarization, part-of-speech (POS) tagging, and bilingual dictionary induction (BDI), bearing in mind that those are not the only possible applications.
27

Karn, Anil K. "Order Embedding of a Matrix Ordered Space." Bulletin of the Australian Mathematical Society 84, no. 1 (June 21, 2011): 10–18. http://dx.doi.org/10.1017/s000497271100222x.

Abstract:
We characterize certain properties in a matrix ordered space in order to embed it in a C*-algebra. Let such spaces be called C*-ordered operator spaces. We show that for every self-adjoint operator space there exists a matrix order (on it) to make it a C*-ordered operator space. However, the operator space dual of a (nontrivial) C*-ordered operator space cannot be embedded in any C*-algebra.
28

Pogány, Domonkos, and Péter Antal. "Towards explainable interaction prediction: Embedding biological hierarchies into hyperbolic interaction space." PLOS ONE 19, no. 3 (March 21, 2024): e0300906. http://dx.doi.org/10.1371/journal.pone.0300906.

Abstract:
Given the prolonged timelines and high costs associated with traditional approaches, accelerating drug development is crucial. Computational methods, particularly drug-target interaction prediction, have emerged as efficient tools, yet the explainability of machine learning models remains a challenge. Our work aims to provide more interpretable interaction prediction models using similarity-based prediction in a latent space aligned to biological hierarchies. We investigated integrating drug and protein hierarchies into a joint-embedding drug-target latent space via embedding regularization by conducting a comparative analysis between models employing traditional flat Euclidean vector spaces and those utilizing hyperbolic embeddings. Besides, we provided a latent space analysis as an example to show how we can gain visual insights into the trained model with the help of dimensionality reduction. Our results demonstrate that hierarchy regularization improves interpretability without compromising predictive performance. Furthermore, integrating hyperbolic embeddings, coupled with regularization, enhances the quality of the embedded hierarchy trees. Our approach enables a more informed and insightful application of interaction prediction models in drug discovery by constructing an interpretable hyperbolic latent space, simultaneously incorporating drug and target hierarchies and pairing them with available interaction information. Moreover, compatible with pairwise methods, the approach allows for additional transparency through existing explainable AI solutions.
29

Zhang, Pei, Guoliang Fan, Chanyue Wu, Dong Wang, and Ying Li. "Task-Adaptive Embedding Learning with Dynamic Kernel Fusion for Few-Shot Remote Sensing Scene Classification." Remote Sensing 13, no. 21 (October 20, 2021): 4200. http://dx.doi.org/10.3390/rs13214200.

Abstract:
The central goal of few-shot scene classification is to learn a model that can generalize well to a novel scene category (UNSEEN) from only one or a few labeled examples. Recent works in the Remote Sensing (RS) community tackle this challenge by developing algorithms in a meta-learning manner. However, most prior approaches have either focused on rapidly optimizing a meta-learner or on finding good similarity metrics, while overlooking the embedding power. Here we propose a novel Task-Adaptive Embedding Learning (TAEL) framework that complements the existing methods by giving full play to feature embedding's dual roles in few-shot scene classification: representing images and constructing classifiers in the embedding space. First, we design a Dynamic Kernel Fusion Network (DKF-Net) that enriches the diversity and expressive capacity of embeddings by dynamically fusing information from multiple kernels. Second, we present a task-adaptive strategy that helps to generate more discriminative representations by transforming the universal embeddings into task-adaptive embeddings via a self-attention mechanism. We evaluate our model in the standard few-shot learning setting on two challenging datasets: NWPU-RESISC45 and RSD46-WHU. Experimental results demonstrate that, on all tasks, our method achieves state-of-the-art performance by a significant margin.
30

Liang, Shangsong, Zhuo Ouyang, and Zaiqiao Meng. "A Normalizing Flow-Based Co-Embedding Model for Attributed Networks." ACM Transactions on Knowledge Discovery from Data 16, no. 3 (June 30, 2022): 1–31. http://dx.doi.org/10.1145/3477049.

Abstract:
Network embedding is a technique that aims at inferring the low-dimensional representations of nodes in a semantic space. In this article, we study the problem of inferring the low-dimensional representations of both nodes and attributes for attributed networks in the same semantic space, such that the affinity between a node and an attribute can be effectively measured. Intuitively, this problem can be addressed by simply utilizing existing variational auto-encoder (VAE) based network embedding algorithms. However, the variational posterior distribution in previous VAE-based network embedding algorithms is often assumed and restricted to be a mean-field Gaussian distribution or another simple distribution family, which results in poor inference of the embeddings. To alleviate the above defect, we propose a novel VAE-based co-embedding method for attributed networks, F-CAN, where the posterior distributions are flexible, complex, and scalable distributions constructed through normalizing flows. We evaluate our proposed models on a number of network tasks with several benchmark datasets. Experimental results demonstrate clear improvements in the quality of the embeddings generated by our model over state-of-the-art attributed network embedding methods.
31

Rizkallah, Sandra, Amir F. Atiya, and Samir Shaheen. "New Vector-Space Embeddings for Recommender Systems." Applied Sciences 11, no. 14 (July 13, 2021): 6477. http://dx.doi.org/10.3390/app11146477.

Abstract:
In this work, we propose a novel recommender system model based on a technology commonly used in natural language processing called word vector embedding. In this technology, a word is represented by a vector that is embedded in an n-dimensional space. The distance between two vectors expresses the level of similarity/dissimilarity of their underlying words. Since item similarities and user similarities are the basis of designing a successful collaborative filtering, vector embedding seems to be a good candidate. As opposed to words, we propose a vector embedding approach for learning vectors for items and users. There have been very few recent applications of vector embeddings in recommender systems, but they have limitations in the type of formulations that are applicable. We propose a novel vector embedding that is versatile, in the sense that it is applicable for the prediction of ratings and for the recommendation of top items that are likely to appeal to users. It could also possibly take into account content-based features and demographic information. The approach is a simple relaxation algorithm that optimizes an objective function, defined based on target users’, items’ or joint user–item’s similarities in their respective vector spaces. The proposed approach is evaluated using real life datasets such as “MovieLens”, “ModCloth”, “Amazon: Magazine_Subscriptions” and “Online Retail”. The obtained results are compared with some of the leading benchmark methods, and they show a competitive performance.
32

Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence." Applied Sciences 11, no. 7 (April 3, 2021): 3214. http://dx.doi.org/10.3390/app11073214.

Abstract:
Visual-semantic embedding aims to learn a joint embedding space where related video and sentence instances are located close to each other. Most existing methods put instances in a single embedding space. However, they struggle to embed instances due to the difficulty of matching visual dynamics in videos to textual features in sentences. A single space is not enough to accommodate various videos and sentences. In this paper, we propose a novel framework that maps instances into multiple individual embedding spaces so that we can capture multiple relationships between instances, leading to compelling video retrieval. We propose to produce a final similarity between instances by fusing similarities measured in each embedding space using a weighted sum strategy. We determine the weights according to a sentence. Therefore, we can flexibly emphasize an embedding space. We conducted sentence-to-video retrieval experiments on a benchmark dataset. The proposed method achieved superior performance, and the results are competitive to state-of-the-art methods. These experimental results demonstrated the effectiveness of the proposed multiple embedding approach compared to existing methods.
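The fusion described above is a weighted sum of per-space similarities with sentence-dependent weights. A minimal sketch of just that step (in the paper the weights come from a learned, sentence-conditioned module; here they are simply an input):

```python
import numpy as np

def fused_similarity(video_embs, sent_embs, weight_logits):
    """Weighted-sum fusion of cosine similarities, one per embedding space."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sims = np.array([cos(v, s) for v, s in zip(video_embs, sent_embs)])
    w = np.exp(weight_logits)
    w /= w.sum()                          # softmax: weights sum to 1
    return float(w @ sims)

rng = np.random.default_rng(0)
v = [rng.normal(size=32) for _ in range(3)]   # three embedding spaces
s = [rng.normal(size=32) for _ in range(3)]
print(fused_similarity(v, s, np.array([0.2, 1.5, -0.3])))
```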
33

Peng, Yanhui, Jing Zhang, Cangqi Zhou, and Shunmei Meng. "Knowledge Graph Entity Alignment Using Relation Structural Similarity." Journal of Database Management 33, no. 1 (January 1, 2022): 1–19. http://dx.doi.org/10.4018/jdm.305733.

Abstract:
Embedding-based entity alignment, which represents knowledge graphs as low-dimensional embeddings and finds entities in different knowledge graphs that semantically represent the same real-world entity by measuring the similarities between entity embeddings, has achieved promising results. However, existing methods are still challenged by the error accumulation of embeddings along multi-step paths and the semantic information loss. This paper proposes a novel embedding-based entity alignment method that iteratively aligns both entities and relations with high similarities as training data. Newly-aligned entities and relations are used to calibrate the corresponding embeddings in the unified embedding space, which reduces the error accumulation. To reduce the negative impact of semantic information loss, the authors propose to use relation structural similarity instead of embedding similarity to align relations. Experimental results on five widely used real-world datasets show that the proposed method significantly outperforms several state-of-the-art methods for entity alignment.
34

Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings." Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.

Abstract:
Embeddings, i.e., vector representations of objects such as texts, images, or graphs, play a key role in deep learning methodologies nowadays. Prior research has shown the importance of analyzing the isotropy of textual embeddings for transformer-based text encoders, such as the BERT model. Anisotropic word embeddings do not use the entire space, instead concentrating on a narrow cone in the pretrained vector space, which negatively affects the performance of applications such as textual semantic similarity. Transforming a vector space to optimize isotropy has been shown to be beneficial for improving performance in text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings, using the example of OpenAI's CLIP pretrained model. We aimed to deepen the understanding of the embedding space of multimodal embeddings, which has previously been unexplored in this respect, and to study the impact on various end tasks. Our initial efforts were focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhance these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, the zero-shot classification accuracy. Significantly, our analysis revealed that both the CLIP and BERT models yielded embeddings situated within a cone immediately after initialization, before any training; however, they were mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics were language-independent. By computing the few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. Transforming embeddings using the methods described in this article makes them easier to visualize. At the same time, multiple experiments that we conducted showed that the downstream task performance of the transformed embeddings does not drop substantially (and is sometimes even improved). This means that one can obtain an easily visualizable embedding space without substantially losing downstream task quality.
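One common proxy for the anisotropy discussed here (not necessarily the paper's exact measure) is the spread of singular values of the embedding matrix, and mean-centring is the simplest gradient-free transform of the kind the authors evaluate:

```python
import numpy as np

def isotropy_score(emb):
    """Squared ratio of smallest to largest singular value of the
    embedding matrix; 1.0 would be perfectly isotropic."""
    s = np.linalg.svd(emb, compute_uv=False)
    return float((s[-1] / s[0]) ** 2)

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32)) + 5.0        # shared offset -> narrow cone
print(isotropy_score(emb))                     # low: anisotropic
print(isotropy_score(emb - emb.mean(axis=0)))  # higher after mean-centring
```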
35

Bagchi, Susmit. "The Sequential and Contractible Topological Embeddings of Functional Groups." Symmetry 12, no. 5 (May 8, 2020): 789. http://dx.doi.org/10.3390/sym12050789.

Abstract:
The continuous and injective embeddings of closed curves in Hausdorff topological spaces maintain isometry in subspaces generating components. An embedding of a circle group within a topological space creates isometric subspace with rotational symmetry. This paper introduces the generalized algebraic construction of functional groups and its topological embeddings into normal spaces maintaining homeomorphism of functional groups. The proposed algebraic construction of functional groups maintains homeomorphism to rotationally symmetric circle groups. The embeddings of functional groups are constructed in a sequence in the normal topological spaces. First, the topological decomposition and associated embeddings of a generalized group algebraic structure in the lower dimensional space is presented. It is shown that the one-point compactification property of topological space containing the decomposed group embeddings can be identified. Second, the sequential topological embeddings of functional groups are formulated. The proposed sequential embeddings follow Schoenflies property within the normal topological space. The preservation of homeomorphism between disjoint functional group embeddings under Banach-type contraction is analyzed taking into consideration that the underlying topological space is Hausdorff and the embeddings are in a monotone class. It is shown that components in a monotone class of isometry are not separable, whereas the multiple disjoint monotone class of embeddings are separable. A comparative analysis of the proposed concepts and formulations with respect to the existing structures is included in the paper.
36

Goel, Anmol, and Ponnurangam Kumaraguru. "Detecting Lexical Semantic Change across Corpora with Smooth Manifolds (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15783–84. http://dx.doi.org/10.1609/aaai.v35i18.17888.

Abstract:
Comparing two bodies of text and detecting words with significant lexical semantic shift between them is an important part of digital humanities. Traditional approaches have relied on aligning the different embeddings using the Orthogonal Procrustes problem in the Euclidean space. This study presents a geometric framework that leverages smooth Riemannian manifolds for corpus-specific orthogonal rotations and a corpus-independent scaling metric to project the different vector spaces into a shared latent space. This enables us to capture any affine relationship between the embedding spaces while utilising the rich geometry of smooth manifolds.
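The Orthogonal Procrustes step that this abstract contrasts with has a closed-form solution via the SVD. A minimal sketch of that classical Euclidean baseline:

```python
import numpy as np

def procrustes_rotation(x, y):
    """Orthogonal matrix W minimising ||X W - Y||_F (Orthogonal Procrustes)."""
    u, _, vt = np.linalg.svd(x.T @ y)
    return u @ vt

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 50))                # embeddings from corpus 1
q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
y = x @ q                                     # corpus 2: a rotated copy
print(np.allclose(x @ procrustes_rotation(x, y), y))  # True
```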
37

Bonder, Julián Fernández, Rafael Orive, and Julio D. Rossi. "The Best Sobolev Trace Constant in Periodic Media for Critical and Subcritical Exponents." Glasgow Mathematical Journal 51, no. 3 (September 2009): 619–30. http://dx.doi.org/10.1017/s0017089509990048.

Abstract:
In this paper we study homogenisation problems for the Sobolev trace embedding H^1(Ω) ↪ L^q(∂Ω) in a bounded smooth domain. When q = 2 this leads to a Steklov-like eigenvalue problem. We deal with the best constant of the Sobolev trace embedding in rapidly oscillating periodic media, and we consider H^1 and L^q spaces with weights that are periodic in space. We find that extremals for these embeddings converge to a solution of a homogenised limit problem, and the best trace constant converges to a homogenised best trace constant. Our results are in fact more general; we can also consider general operators of the form a_ɛ(x, ∇u) with non-linear Neumann boundary conditions. In particular, we can deal with the embedding W^{1,p}(Ω) ↪ L^q(∂Ω).
38

Toyoda, Tetsu. "An Intrinsic Characterization of Five Points in a CAT(0) Space." Analysis and Geometry in Metric Spaces 8, no. 1 (August 27, 2020): 114–65. http://dx.doi.org/10.1515/agms-2020-0111.

Abstract:
Gromov (2001) and Sturm (2003) proved that any four points in a CAT(0) space satisfy a certain family of inequalities. We call those inequalities the ⊠-inequalities, following the notation used by Gromov. In this paper, we prove that a metric space X containing at most five points admits an isometric embedding into a CAT(0) space if and only if any four points in X satisfy the ⊠-inequalities. To prove this, we introduce a new family of necessary conditions for a metric space to admit an isometric embedding into a CAT(0) space by modifying and generalizing Gromov's cycle conditions. Furthermore, we prove that if a metric space satisfies all those necessary conditions, then it admits an isometric embedding into a CAT(0) space. This work presents a new approach to characterizing those metric spaces that admit an isometric embedding into a CAT(0) space.
39

Das, Kajal. "From the geometry of box spaces to the geometry and measured couplings of groups." Journal of Topology and Analysis 10, no. 02 (June 2018): 401–20. http://dx.doi.org/10.1142/s1793525318500127.

Abstract:
In this paper, we prove that if two “box spaces” of two residually finite groups are coarsely equivalent, then the two groups are “uniform measured equivalent” (UME). More generally, we prove that if there is a coarse embedding of one box space into another box space, then there exists a “uniform measured equivalent embedding” (UME-embedding) of the first group into the second one. This is a reinforcement of the easier fact that a coarse equivalence (resp. a coarse embedding) between the box spaces gives rise to a coarse equivalence (resp. a coarse embedding) between the groups. We deduce new invariants that distinguish box spaces up to coarse embedding and coarse equivalence. In particular, we obtain that the expanders coming from [Formula: see text] cannot be coarsely embedded inside the expanders of [Formula: see text], where [Formula: see text] and [Formula: see text]. Moreover, we obtain a countable class of residually finite groups that are mutually coarsely equivalent although none of their box spaces are coarsely equivalent to one another.
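As background (our paraphrase of the usual definition, not the paper's notation): given a residually finite group G generated by a finite set S and a nested sequence N_1 ⊇ N_2 ⊇ ⋯ of finite-index normal subgroups with trivial intersection, the associated box space is the coarse disjoint union of the finite quotient Cayley graphs,

\[
  \Box_{(N_i)} G \;=\; \bigsqcup_i \operatorname{Cay}\bigl(G/N_i,\ \bar S_i\bigr),
\]

where \bar S_i is the image of S in G/N_i and the components are placed increasingly far apart.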
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Wang, Haochun, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, and Ting Liu. "Manifold-Based Verbalizer Space Re-embedding for Tuning-Free Prompt-Based Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19126–34. http://dx.doi.org/10.1609/aaai.v38i17.29880.

Full text
Abstract:
Prompt-based classification adapts tasks to a cloze question format using the [MASK] token; the filled tokens are then mapped to labels through pre-defined verbalizers. Recent studies have explored the use of verbalizer embeddings to reduce labor in this process. However, all existing studies require a tuning process for either the pre-trained models or additional trainable embeddings. Meanwhile, the distance between high-dimensional verbalizer embeddings should not be measured by Euclidean distance, due to the potential for non-linear manifolds in the representation space. In this study, we propose a tuning-free manifold-based space re-embedding method for verbalizer embeddings, called Locally Linear Embedding with Intra-class Neighborhood Constraint (LLE-INC), which preserves local properties within the same class as guidance for classification. Experimental results indicate that, even without tuning any parameters, our LLE-INC is on par with automated verbalizers that require parameter tuning. With parameter updating, our approach further enhances prompt-based tuning by up to 3.2%. Furthermore, experiments with LLaMA-7B and LLaMA-13B indicate that LLE-INC is an efficient tuning-free classification approach for hyper-scale language models.
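The vanilla locally linear embedding on which LLE-INC builds is available off the shelf; the sketch below shows only that standard step, with illustrative shapes of our choosing (the intra-class neighborhood constraint that gives the paper its name is not reproduced here).

import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Verbalizer token embeddings taken from a frozen pre-trained model
# (random placeholders here), shape (n_tokens, hidden_dim).
rng = np.random.default_rng(0)
verbalizer_embs = rng.standard_normal((200, 768))

# Re-embed into a low-dimensional space that preserves each point's
# linear reconstruction from its nearest neighbors on the manifold.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=32)
low_dim = lle.fit_transform(verbalizer_embs)  # shape (200, 32)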
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Jiang, Yueyu, Puoya Tabaghi, and Siavash Mirarab. "Learning Hyperbolic Embedding for Phylogenetic Tree Placement and Updates." Biology 11, no. 9 (August 24, 2022): 1256. http://dx.doi.org/10.3390/biology11091256.

Full text
Abstract:
Phylogenetic placement, used widely in ecological analyses, seeks to add a new species to an existing tree. A deep learning approach was previously proposed to estimate the distance between query and backbone species by building a map from gene sequences to a high-dimensional space that preserves species tree distances; a distance-based placement method then places the queries on that species tree. In this paper, we examine the appropriate geometry for faithfully representing tree distances while embedding gene sequences. Theory predicts that hyperbolic spaces should provide a drastic reduction in distance distortion compared to the conventional Euclidean space. Nevertheless, hyperbolic embedding imposes its own unique challenges related to arithmetic operations, exponentially growing functions, and limited bit precision, and we address these challenges. Our results confirm that hyperbolic embeddings have substantially lower distance errors than Euclidean space. However, these better-estimated distances do not always lead to better phylogenetic placement. We then show that the deep learning framework can be used not just to place queries on a backbone tree but to update it, obtaining a fully resolved tree. With our hyperbolic embedding framework, species trees can be updated remarkably accurately using only a handful of genes.
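The geometry in question is easy to make concrete: in the Poincaré-ball model, the distance between two embedded points has the closed form sketched below. This is our illustration of the underlying metric, not the authors' code, which additionally has to confront the precision issues the abstract mentions.

import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray, eps: float = 1e-12) -> float:
    """Geodesic distance between two points strictly inside the unit ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    # Near the boundary of the ball, denom underflows: this is where the
    # limited bit precision noted in the abstract becomes a real problem.
    return float(np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps)))

u = np.array([0.1, 0.2])
v = np.array([0.7, -0.3])
print(poincare_distance(u, v))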
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Richter, Marcus, and Thomas Schreiber. "Phase space embedding of electrocardiograms." Physical Review E 58, no. 5 (November 1, 1998): 6392–98. http://dx.doi.org/10.1103/physreve.58.6392.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Frankl, Nóra, Andrey Kupavskii, and Konrad J. Swanepoel. "Embedding graphs in Euclidean space." Journal of Combinatorial Theory, Series A 171 (April 2020): 105146. http://dx.doi.org/10.1016/j.jcta.2019.105146.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Frankl, Nóra, Andrey Kupavskii, and Konrad J. Swanepoel. "Embedding graphs in Euclidean space." Electronic Notes in Discrete Mathematics 61 (August 2017): 475–81. http://dx.doi.org/10.1016/j.endm.2017.06.076.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Chang, Haw-Shiuan, Amol Agrawal, and Andrew McCallum. "Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6956–65. http://dx.doi.org/10.1609/aaai.v35i8.16857.

Full text
Abstract:
Most unsupervised NLP models represent each word with a single point or single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences such as phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence at test time. Our experiments show that the per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we find that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline.
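The "cluster centers" intuition can be mimicked with plain k-means over pre-trained word vectors; the toy sketch below conveys only the idea (the paper instead trains a neural model to predict the centers directly from the input sequence).

import numpy as np
from sklearn.cluster import KMeans

# Pre-trained embeddings of words that might co-occur with a given
# sentence (random placeholders here), shape (n_words, dim).
rng = np.random.default_rng(0)
context_vecs = rng.standard_normal((500, 300))

# A K-mode codebook for the sequence: K cluster centers summarizing
# the distribution of its plausible context words.
K = 8
codebook = KMeans(n_clusters=K, n_init=10).fit(context_vecs).cluster_centers_
print(codebook.shape)  # (8, 300)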
Styles: APA, Harvard, Vancouver, ISO, etc.
46

KLEIN, JOHN R. "EMBEDDINGS, NORMAL INVARIANTS AND FUNCTOR CALCULUS." Nagoya Mathematical Journal 225 (August 19, 2016): 152–84. http://dx.doi.org/10.1017/nmj.2016.37.

Full text
Abstract:
This paper investigates the space of codimension zero embeddings of a Poincaré duality space in a disk. One of our main results exhibits a tower that interpolates from the space of Poincaré immersions to a certain space of “unlinked” Poincaré embeddings. The layers of this tower are described in terms of the coefficient spectra of the identity appearing in Goodwillie’s homotopy functor calculus. We also answer a question posed to us by Sylvain Cappell. The appendix proposes a conjectural relationship between our tower and the manifold calculus tower for the smooth embedding space.
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Zhang, Pengfei, Dong Chen, Yang Fang, Xiang Zhao, and Weidong Xiao. "CIST: Differentiating Concepts and Instances Based on Spatial Transformation for Knowledge Graph Embedding." Mathematics 10, no. 17 (September 2, 2022): 3161. http://dx.doi.org/10.3390/math10173161.

Full text
Abstract:
Knowledge representation learning represents entities and relations in a knowledge graph as dense low-dimensional vectors in a continuous space, capturing the features and properties of the graph. Such a technique facilitates computation and reasoning on knowledge graphs, benefiting many downstream tasks. To alleviate the problem of insufficient entity representation learning caused by sparse knowledge graphs, some researchers have proposed knowledge graph embedding models based on instances and concepts, which utilize the latent semantic connections between concepts and instances contained in the knowledge graph to enhance the embedding. However, these models either place instances and concepts in the same space or ignore the transitivity of isA relations, leading to inaccurate embeddings of concepts and instances. To address these shortcomings, we propose CIST, a knowledge graph embedding model that differentiates concepts and instances based on spatial transformation. The model alleviates the gathering of similar instances or concepts in the semantic space by modeling them in different embedding spaces, and it adds a learnable parameter that adjusts the neighboring range of a concept embedding to distinguish the hierarchical information of different concepts, thus modeling the transitivity of isA relations. These features of instances and concepts serve as auxiliary information, so modeling them thoroughly alleviates the insufficient entity representation learning issue. For the experiments, we chose two tasks, link prediction and triple classification, and two real-life datasets: YAGO26K-906 and DB111K-174. Compared with state-of-the-art models, CIST achieves the best performance in most cases. Specifically, CIST outperforms the SOTA model JOIE by 51.1% on Hits@1 in link prediction and by 15.2% on F1 score in triple classification.
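For readers new to translational knowledge graph embedding, the scoring idea that models in this family refine can be sketched as below. This is a generic TransE-style score shown for intuition only, not CIST's spatial transformation between concept and instance spaces.

import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Plausibility of a triple (h, r, t); higher (less negative) is better."""
    return float(-np.linalg.norm(h + r - t))

# Illustrative 50-dimensional embeddings (random placeholders).
rng = np.random.default_rng(0)
head, rel, tail = (rng.standard_normal(50) for _ in range(3))
print(transe_score(head, rel, tail))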
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Monnin, Pierre, Chedy Raïssi, Amedeo Napoli, and Adrien Coulet. "Discovering alignment relations with Graph Convolutional Networks: A biomedical case study." Semantic Web 13, no. 3 (April 6, 2022): 379–98. http://dx.doi.org/10.3233/sw-210452.

Full text
Abstract:
Knowledge graphs are freely aggregated, published, and edited in the Web of data, and thus may overlap. Hence, a key task resides in aligning (or matching) their content. This task encompasses the identification, within an aggregated knowledge graph, of nodes that are equivalent, more specific, or weakly related. In this article, we propose to match nodes within a knowledge graph by (i) learning node embeddings with Graph Convolutional Networks such that similar nodes have low distances in the embedding space, and (ii) clustering nodes based on their embeddings, in order to suggest alignment relations between nodes of the same cluster. We conducted experiments with this approach on the real-world application of aligning knowledge in the field of pharmacogenomics, which motivated our study. We particularly investigated the interplay between domain knowledge and GCN models with the following two focuses. First, we applied inference rules associated with domain knowledge, independently or combined, before learning node embeddings, and we measured the improvements in matching results. Second, while our GCN model is agnostic to the exact alignment relations (e.g., equivalence, weak similarity), we observed that distances in the embedding space are coherent with the “strength” of these different relations (e.g., smaller distances for equivalences), letting us consider clustering and distances in the embedding space as a means to suggest alignment relations in our case study.
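Step (ii) of the pipeline reduces to an ordinary clustering call once the embeddings are learned; a minimal sketch under assumed shapes follows (the GCN training of step (i) is omitted, and random vectors stand in for real node embeddings).

import numpy as np
from sklearn.cluster import DBSCAN

# Node embeddings produced by the trained GCN (random stand-ins here),
# shape (n_nodes, dim).
rng = np.random.default_rng(0)
node_embs = rng.standard_normal((1000, 64))

# Density-based clustering: nodes in the same cluster become candidate
# pairs for an alignment relation; label -1 marks unclustered noise.
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(node_embs)
clusters = {c: np.where(labels == c)[0] for c in set(labels) if c != -1}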
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Manjunath, G. "Embedding information onto a dynamical system." Nonlinearity 35, no. 3 (January 21, 2022): 1131–51. http://dx.doi.org/10.1088/1361-6544/ac4817.

Full text
Abstract:
The celebrated Takens' embedding theorem concerns embedding an attractor of a dynamical system in a Euclidean space of appropriate dimension through a generic delay-observation map. The embedding also establishes a topological conjugacy. In this paper, we show how an arbitrary sequence can be mapped into another space as an attractive solution of a nonautonomous dynamical system. Such a mapping also entails a topological conjugacy and an embedding between the sequence and the attractive solution spaces. This result is not a generalisation of Takens' embedding theorem, but it helps us understand what exactly is required by the discrete-time state space models widely used in applications to embed an external stimulus onto their solution space. Our results settle another basic problem concerning the perturbation of an autonomous dynamical system: we describe what exactly happens to the dynamics when exogenous noise continuously perturbs a local irreducible attracting set (such as a stable fixed point) of a discrete-time autonomous dynamical system.
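The generic delay-observation map the abstract refers to is the classical Takens construction; a minimal sketch for a scalar observable follows (standard background, not the paper's nonautonomous generalisation).

import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Map a scalar series into delay vectors
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

x = np.sin(0.1 * np.arange(1000))        # toy observable of a dynamical system
points = delay_embed(x, dim=3, tau=15)   # shape (970, 3): reconstructed orbit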
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Sun, Ke, Shuo Yu, Ciyuan Peng, Yueru Wang, Osama Alfarraj, Amr Tolba, and Feng Xia. "Relational Structure-Aware Knowledge Graph Representation in Complex Space." Mathematics 10, no. 11 (June 4, 2022): 1930. http://dx.doi.org/10.3390/math10111930.

Full text
Abstract:
Relations in knowledge graphs exhibit rich relational structures and varied binary relational patterns. Many relation modelling strategies have been proposed for embedding knowledge graphs, but they fail to fully capture both of these features. To address the problem of insufficient embedding caused by the complexity of relations, we propose MARS, a novel knowledge graph representation model in complex space that exploits complex relations to embed knowledge graphs. MARS combines complex-number mechanisms with message passing and embeds triplets into relation-specific complex hyperplanes. Thus, MARS preserves various relation patterns as well as structural information in knowledge graphs. In addition, we find that the scores generated by the score function approximate a Gaussian distribution, and scores in the tail cannot effectively represent triplets. To address this issue and improve the precision of embeddings, we use the standard deviation to limit the dispersion of the score distribution, resulting in more accurate triplet embeddings. Comprehensive experiments on multiple benchmarks demonstrate that our model significantly outperforms existing state-of-the-art models for link prediction and triple classification.
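The advantage of complex space for relation patterns is easiest to see in rotation-style scoring, on which models in this family build. The sketch below shows a generic RotatE-like score for intuition; MARS itself adds message passing and relation-specific complex hyperplanes.

import numpy as np

def rotation_score(h: np.ndarray, r_phase: np.ndarray, t: np.ndarray) -> float:
    """Score a triple by rotating the complex head embedding elementwise.
    Unit-modulus rotations compose, invert, and commute, which lets this
    family of models express patterns such as symmetry and inversion."""
    r = np.exp(1j * r_phase)  # elementwise rotation on the unit circle
    return float(-np.linalg.norm(h * r - t))

rng = np.random.default_rng(0)
h = rng.standard_normal(50) + 1j * rng.standard_normal(50)
t = rng.standard_normal(50) + 1j * rng.standard_normal(50)
r_phase = rng.uniform(0.0, 2.0 * np.pi, size=50)
print(rotation_score(h, r_phase, t))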
Styles: APA, Harvard, Vancouver, ISO, etc.