Selection of scientific literature on the topic "Vectorial embeddings"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Vectorial embeddings".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Vectorial embeddings"

1

Rydhe, Eskil. "Vectorial Hankel operators, Carleson embeddings, and notions of BMOA". Geometric and Functional Analysis 27, no. 2 (March 7, 2017): 427–51. http://dx.doi.org/10.1007/s00039-017-0400-4.

2

Szymański, Piotr. "A broadband multistate interferometer for impedance measurement". Journal of Telecommunications and Information Technology, no. 2 (June 30, 2005): 29–33. http://dx.doi.org/10.26636/jtit.2005.2.311.

Abstract:
We present a new four-state interferometer for measuring the vectorial reflection coefficient from 50 to 1800 MHz. The interferometer is composed of a four-state phase shifter, a double-directional coupler, and a spectrum analyzer with a built-in tracking generator. We describe the design of the interferometer and the methods developed for its calibration and for de-embedding the measurements. Experimental data verify the good accuracy of the impedance measurement.
3

Hammer, Barbara, and Alexander Hasenfuss. "Topographic Mapping of Large Dissimilarity Data Sets". Neural Computation 22, no. 9 (September 2010): 2229–84. http://dx.doi.org/10.1162/neco_a_00012.

Abstract:
Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow clustering data and simultaneously inferring their topological structure, so that additional features, for example browsing, become available. Both methods were introduced for vectorial data sets; they require a classical feature encoding of information. Often, data are available in the form of pairwise distances only, such as arise from a kernel matrix, a graph, or some general dissimilarity measure. In such cases, NG and SOM cannot be applied directly. In this article, we introduce relational topographic maps as an extension of relational clustering algorithms, which offer prototype-based representations of dissimilarity data, to incorporate neighborhood structure. These methods are equivalent to the standard (vectorial) techniques if a Euclidean embedding exists, while avoiding the need to explicitly compute such an embedding. Extending these techniques to the general case of non-Euclidean dissimilarities makes possible an interpretation of relational clustering as clustering in pseudo-Euclidean space. We compare the methods to well-known clustering methods for proximity data based on deterministic annealing and discuss how far convergence can be guaranteed in the general case. Relational clustering is quadratic in the number of data points, which makes the algorithms infeasible for huge data sets. We propose an approximate patch version of relational clustering that runs in linear time. The effectiveness of the methods is demonstrated in a number of examples.
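The Euclidean-embedding idea this abstract builds on can be illustrated in a few lines of NumPy: double-center the squared dissimilarity matrix and eigendecompose it, as in classical multidimensional scaling. Negative eigenvalues would signal the pseudo-Euclidean case. This is a minimal sketch of the underlying embedding notion, not of the relational NG/SOM algorithms themselves:

```python
import numpy as np

def mds_embedding(D, dim=2):
    """Embed a symmetric dissimilarity matrix D (n x n) into a vector
    space via double centering (classical MDS). Negative eigenvalues of
    the centered Gram matrix indicate non-Euclidean (pseudo-Euclidean)
    dissimilarities; they are clipped to zero here."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ (D ** 2) @ J              # centered Gram matrix
    w, V = np.linalg.eigh(G)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]          # keep the largest components
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Pairwise distances of three collinear points at positions 0, 1, 2:
D = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
X = mds_embedding(D, dim=1)
# The embedding reproduces the input dissimilarities.
assert np.allclose(np.abs(X[0] - X[2]), 2.0)
```

When no exact Euclidean embedding exists, the clipped negative eigenvalues quantify how far the data deviate from Euclidean geometry.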
4

Riesen, Kaspar, and Horst Bunke. "Graph Classification Based on Vector Space Embedding". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 06 (September 2009): 1053–81. http://dx.doi.org/10.1142/s021800140900748x.

Abstract:
Graphs provide us with a powerful and flexible representation formalism for pattern classification. Many classification algorithms have been proposed in the literature; however, the vast majority rely on vectorial data descriptions and cannot be applied directly to graphs. Recently, a growing interest in graph kernel methods can be observed. Graph kernels aim at bridging the gap between the high representational power and flexibility of graphs and the large number of algorithms available for object representations in terms of feature vectors. In the present paper, we propose an approach that transforms graphs into n-dimensional real vectors by means of prototype selection and graph edit distance computation. This approach allows one to build graph kernels in a straightforward way. It is applicable not only to graphs but also to other kinds of symbolic data, in conjunction with any kind of dissimilarity measure, and is thus characterized by a high degree of flexibility. With several experimental results, we demonstrate the robustness and flexibility of our new method and show that it outperforms other graph classification methods on several graph data sets of diverse nature.
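The prototype-based embedding described above admits a compact sketch: each object becomes the vector of its dissimilarities to a set of prototypes. The `size_dist` function below is a toy stand-in for graph edit distance (which is far more involved in practice); any dissimilarity measure slots in the same way:

```python
import numpy as np

def dissimilarity_embedding(objects, prototypes, dist):
    """Map each object to the vector of its distances to the chosen
    prototypes, turning arbitrary symbolic data into real vectors."""
    return np.array([[dist(o, p) for p in prototypes] for o in objects])

# Toy stand-in for graph edit distance: absolute difference of sizes.
size_dist = lambda a, b: abs(len(a) - len(b))

objects = [{1, 2, 3}, {1}, {1, 2, 3, 4, 5}]
prototypes = [{1}, {1, 2, 3, 4}]
X = dissimilarity_embedding(objects, prototypes, size_dist)
# Each object is now a 2-D real vector usable by any vectorial classifier.
assert X.shape == (3, 2)
```

The choice of prototypes controls both the embedding dimension and how much of the original dissimilarity structure is preserved.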
5

Zhu, Huiming, Chunhui He, Yang Fang, Bin Ge, Meng Xing, and Weidong Xiao. "Patent Automatic Classification Based on Symmetric Hierarchical Convolution Neural Network". Symmetry 12, no. 2 (January 21, 2020): 186. http://dx.doi.org/10.3390/sym12020186.

Abstract:
With the rapid growth of patent applications, it has become an urgent problem to classify accepted patent application documents automatically, accurately, and quickly. Most previous studies on automatic patent classification are based on feature engineering and traditional machine learning methods such as SVM, and some even rely on the knowledge of domain experts; hence they suffer from low accuracy and poor generalization ability. In this paper, we propose a patent automatic classification method via a symmetric hierarchical convolutional neural network (CNN) named PAC-HCNN. We use the title and abstract of the patent as the input data, and then apply the word embedding technique to segment and vectorize the input data. We then design a symmetric hierarchical CNN framework to classify the patents based on the word embeddings, which is much more efficient than traditional RNN models at processing text while still retaining the history and future information of the input sequence. We also add gated linear units (GLUs) and residual connections to help realize the deep CNN. Additionally, we equip our model with a self-attention mechanism to address the long-term dependency problem. Experiments are performed on large-scale datasets for Chinese short-text patent classification. Experimental results prove our proposed model's effectiveness: it performs significantly and consistently better than other state-of-the-art models on both fine-grained and coarse-grained classification.
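The gated linear unit mentioned in this abstract is simple to write down: one linear map provides the values, a second, sigmoid-squashed map provides the gates. The sketch below uses illustrative shapes and random weights, not the PAC-HCNN architecture:

```python
import numpy as np

def glu(x, W, V, b, c):
    """Gated linear unit: elementwise product of a linear projection
    and a sigmoid gate, as used in gated convolutional networks."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return (x @ W + b) * sigmoid(x @ V + c)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 positions, 8 features
W, V = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
b = c = np.zeros(8)
y = glu(x, W, V, b, c)
assert y.shape == (4, 8)                     # gating preserves the shape
```

Because the gate lies strictly between 0 and 1, each output's magnitude is bounded by the ungated linear projection, which helps stabilize deep stacks.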
6

Ji, Jiayi, Yunpeng Luo, Xiaoshuai Sun, Fuhai Chen, Gen Luo, Yongjian Wu, Yue Gao, and Rongrong Ji. "Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1655–63. http://dx.doi.org/10.1609/aaai.v35i2.16258.

Abstract:
Transformer-based architectures have shown great success in image captioning, where object regions are encoded and then attended into vectorial representations to guide the caption decoding. However, such vectorial representations only contain region-level information without considering the global information reflecting the entire image, which limits the capability for complex multi-modal reasoning in image captioning. In this paper, we introduce a Global Enhanced Transformer (termed GET) to enable the extraction of a more comprehensive global representation, which then adaptively guides the decoder to generate high-quality captions. In GET, a Global Enhanced Encoder is designed for the embedding of the global feature, and a Global Adaptive Decoder is designed for the guidance of the caption generation. The former models intra- and inter-layer global representation by taking advantage of the proposed Global Enhanced Attention and a layer-wise fusion module. The latter contains a Global Adaptive Controller that can adaptively fuse the global information into the decoder to guide the caption generation. Extensive experiments on the MS COCO dataset demonstrate the superiority of our GET over many state-of-the-art methods.
7

Dutta, Anjan, Pau Riba, Josep Lladós, and Alicia Fornés. "Hierarchical stochastic graphlet embedding for graph-based pattern recognition". Neural Computing and Applications 32, no. 15 (December 6, 2019): 11579–96. http://dx.doi.org/10.1007/s00521-019-04642-7.

Abstract:
Despite being very successful within the pattern recognition and machine learning community, graph-based methods are often unusable because of the lack of mathematical operations defined in the graph domain. Graph embedding, which maps graphs to a vectorial space, has been proposed as a way to tackle these difficulties by enabling the use of standard machine learning techniques. However, it is well known that graph embedding functions usually suffer from a loss of structural information. In this paper, we consider the hierarchical structure of a graph as a way to mitigate this loss of information. The hierarchical structure is constructed by topologically clustering the graph nodes and considering each cluster as a node in the upper hierarchical level. Once this hierarchical structure is constructed, we consider several configurations to define the mapping into a vector space given a classical graph embedding; in particular, we propose to make use of the stochastic graphlet embedding (SGE). Broadly speaking, SGE produces a distribution of uniformly sampled low-to-high-order graphlets as a way to embed graphs into the vector space. The coarse-to-fine structure of a graph hierarchy and the statistics fetched by the SGE complement each other and include important structural information with varied contexts. Altogether, these two techniques substantially mitigate the usual information loss involved in graph embedding techniques, obtaining a more robust graph representation. This has been corroborated through a detailed experimental evaluation on various benchmark graph datasets, where we outperform the state-of-the-art methods.
8

Szemenyei, Márton, and Ferenc Vajda. "3D Object Detection and Scene Optimization for Tangible Augmented Reality". Periodica Polytechnica Electrical Engineering and Computer Science 62, no. 2 (May 23, 2018): 25–37. http://dx.doi.org/10.3311/ppee.10482.

Abstract:
Object recognition in 3D scenes is one of the fundamental tasks in computer vision. It is used frequently in robotics and augmented reality applications [1]. In our work we intend to apply 3D shape recognition to create a Tangible Augmented Reality system that is able to pair virtual and real objects in natural indoor scenes. In this paper we present a method for arranging virtual objects in a real-world scene based on primitive shape graphs. For our scheme, we propose a graph node embedding algorithm for graphs with vectorial nodes and edges, and genetic operators designed to improve the quality of the global setup of virtual objects. We show that our methods significantly improve the quality of the arrangement.
9

Shrock, R. "Recent results on renormalization-group evolution of theories with gauge, fermion, and scalar fields". International Journal of Modern Physics A 32, no. 35 (December 20, 2017): 1747007. http://dx.doi.org/10.1142/s0217751x17470078.

Abstract:
We discuss recent results on renormalization-group evolution of several types of theories. First, we consider asymptotically free vectorial gauge theories with various fermion contents and discuss higher-loop calculations of the UV to IR evolution in these theories, including an IR zero of the beta function and the value of the anomalous dimension [Formula: see text] at this point, together with comparisons with lattice measurements. Effects of scheme transformations are discussed. We then present a novel way to determine the value of [Formula: see text] in an [Formula: see text] technicolor model from a particular embedding in an extended technicolor theory. Finally, we analyze the renormalization-group behavior of several non-asymptotically free theories, including a U(1) gauge theory, a non-Abelian gauge theory with many fermions, an [Formula: see text] [Formula: see text] scalar theory, and Yukawa theories.
10

Machet, B. "Comments on the Standard Model of Electroweak Interactions". International Journal of Modern Physics A 11, no. 01 (January 10, 1996): 29–63. http://dx.doi.org/10.1142/s0217751x96000031.

Abstract:
The Standard Model of electroweak interactions is shown to include a gauge theory for the observed scalar and pseudoscalar mesons. This is done by exploiting the consequences of embedding the SU(2)L×U(1) group into the chiral group of strong interactions and by explicitly considering as composite the Higgs boson and its three companions inside the standard scalar four-plet. No extra scale of interaction is needed. Quantizing by the Feynman path integral reveals how, in the “Nambu-Jona-Lasinio approximation,” the quarks and the Higgs boson become unobservable, and the theory anomaly-free. Nevertheless, the “anomalous” couplings of the pseudoscalar mesons to gauge fields spring again from the constraints associated with their compositeness itself. This work is the complement of Ref. 1, where the leptonic sector was shown to be compatible with a purely vectorial theory and, consequently, to be also anomaly-free. The bond between quarks and leptons loosens.

Dissertations on the topic "Vectorial embeddings"

1

Cvetkov-Iliev, Alexis. "Embedding models for relational data analytics". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG004.

Abstract:
Analytical pipelines, such as those relying on machine learning models, typically require data in the form of a single table describing the entities under study with a fixed set of attributes or features. In practice, however, data often come as relational data (e.g. relational databases or knowledge graphs), where information on the entities of interest is irregular and scattered across sources. To leverage this relational data, it must thus be assembled into a format suitable for analysis, which requires time and expertise from the analyst. As an alternative, we investigate in this thesis the potential of embedding models to facilitate relational data assembling. We especially consider two data integration problems: 1) entity matching (e.g. linking "Paris" and "Paris, FR") when dealing with non-normalized data sources that have different knowledge-representation conventions; and 2) feature engineering over relational data to enrich data analyses with background information. Finally, we show that embedding models are indeed promising tools for relational data analytics: 1) "good" vectorial representations (a.k.a. embeddings) of entities can replace manual entity matching without hindering the quality of subsequent analyses; and 2) entity embeddings learned directly over relational data can automate feature engineering in an efficient and scalable way, paving the way for general-purpose representations that can bring background information into various downstream tasks.
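The first claim, that entity embeddings can stand in for manual entity matching, can be illustrated with a nearest-neighbor lookup under cosine similarity. The 3-D vectors and the threshold below are hypothetical toy values, not taken from the thesis:

```python
import numpy as np

def match_entities(queries, candidates, names, threshold=0.8):
    """Link each query embedding to its most similar candidate entity
    by cosine similarity; below the threshold, report no match."""
    Q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    C = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = Q @ C.T
    best = sims.argmax(axis=1)
    return [names[j] if sims[i, j] >= threshold else None
            for i, j in enumerate(best)]

# Hypothetical embeddings where "Paris" and "Paris, FR" are nearly collinear.
candidates = np.array([[1.0, 0.1, 0.0],    # "Paris, FR"
                       [0.0, 1.0, 0.2]])   # "London, UK"
names = ["Paris, FR", "London, UK"]
queries = np.array([[0.9, 0.1, 0.05]])     # embedding of "Paris"
assert match_entities(queries, candidates, names) == ["Paris, FR"]
```

The threshold trades precision against recall: raising it leaves ambiguous entities unmatched rather than risking a wrong link.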
2

Chinea, Ríos Mara. „Advanced techniques for domain adaptation in Statistical Machine Translation“. Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117611.

Abstract:
Statistical Machine Translation is a subfield of computational linguistics that investigates how to use computers in the process of translating a text from one human language to another. Statistical machine translation is the most popular approach used to build such automatic translation systems. The quality of these systems depends largely on the translation examples used during the training and adaptation of the models. The data sets employed are obtained from a wide variety of sources, and in many cases the most suitable data for a specific domain may not be at hand. Given this data-scarcity problem, the main idea is to find the data sets best suited to training or adapting a translation system. In this vein, this thesis proposes a set of data-selection techniques that identify the bilingual data most relevant to a task, extracted from a large data collection. As a first step, the data-selection techniques are applied to improve the translation quality of systems under the phrase-based paradigm. These techniques are based on the concept of continuous representations of words or sentences in a vector space. Experimental results show that the techniques are effective for different languages and domains. The Neural Machine Translation paradigm was also addressed in this thesis. Within this paradigm, we investigate how the data-selection techniques previously validated in the phrase-based paradigm can be applied. The work focused on two different system-adaptation tasks. On the one hand, we investigated how to increase the system's translation quality by enlarging the training set. On the other hand, the data-selection method was used to create a synthetic data set. Experiments were carried out for different domains, and the translation results obtained are convincing for both tasks. Finally, it should be noted that the techniques developed and presented throughout this thesis can easily be implemented in a real translation scenario.
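A minimal sketch of vector-space data selection as described in this abstract: score each candidate sentence vector against the centroid of an in-domain sample and keep the best-scoring ones. The vectors below are toy values; the thesis's actual selection techniques are more elaborate:

```python
import numpy as np

def select_data(pool_vecs, in_domain_vecs, k):
    """Rank a pool of sentence vectors by cosine similarity to the
    centroid of an in-domain sample and return the indices of the
    top-k candidates, a simple form of vector-space data selection."""
    centroid = in_domain_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    P = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    scores = P @ centroid
    return np.argsort(scores)[::-1][:k]

in_domain = np.array([[1.0, 0.0], [0.9, 0.1]])
pool = np.array([[0.0, 1.0],      # off-domain sentence
                 [1.0, 0.05],     # close to the domain centroid
                 [0.5, 0.5]])
assert select_data(pool, in_domain, k=1).tolist() == [1]
```

The selected subset can then be used either to enlarge the training data or, as in the thesis's second task, as a basis for building synthetic data.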
Chinea Ríos, M. (2019). Advanced techniques for domain adaptation in Statistical Machine Translation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/117611

Book chapters on the topic "Vectorial embeddings"

1

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Regularized Kernel Local Linear Embedding on Dimensionality Reduction for Non-vectorial Data". In AI 2009: Advances in Artificial Intelligence, 240–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10439-8_25.

2

Yona, Golan. "Embedding Algorithms and Vectorial Representations". In Introduction to Computational Proteomics, 459–504. Chapman and Hall/CRC, 2010. http://dx.doi.org/10.1201/9781420010770-11.


Conference papers on the topic "Vectorial embeddings"

1

Aoun, Paulo Henrique Calado, Andre C. A. Nascimento, and Adenilton J. Da Silva. "Evaluation of Dimensionality Reduction and Truncation Techniques for Word Embeddings". In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4477.

Abstract:
The use of word embeddings is becoming very common in many Natural Language Processing tasks. Most of the time, these require computational resources that cannot be found in most current mobile devices. In this work, we evaluate a combination of numeric truncation and dimensionality reduction strategies in order to obtain smaller vectorial representations without substantial losses in performance.
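The combination the authors evaluate, dimensionality reduction plus numeric truncation, can be sketched as a PCA projection followed by a cast to half precision. This is an illustrative pipeline under assumed sizes, not the paper's exact method:

```python
import numpy as np

def compress_embeddings(E, dim, dtype=np.float16):
    """Shrink an embedding matrix two ways at once: project onto the
    top `dim` principal components, then truncate numeric precision."""
    E_centered = E - E.mean(axis=0)
    # Right singular vectors give the principal directions.
    U, S, Vt = np.linalg.svd(E_centered, full_matrices=False)
    reduced = E_centered @ Vt[:dim].T        # PCA projection
    return reduced.astype(dtype)             # precision truncation

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 300)).astype(np.float32)  # 1000 words, 300 dims
small = compress_embeddings(E, dim=50)
assert small.shape == (1000, 50)
# 6x fewer dimensions and 2x fewer bytes per value: 12x smaller overall.
assert small.nbytes == E.nbytes // 12
```

The two knobs are independent: on-device deployments can tune the projection dimension and the storage dtype separately against the task's accuracy budget.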
2

Gutierrez-Vasquez, Ximena, and Victor Mijangos. "Low-resource bilingual lexicon extraction using graph based word embeddings". In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai2018120323.

Abstract:
In this work we focus on the task of automatically extracting a bilingual lexicon for the language pair Spanish-Nahuatl. This is a low-resource setting where only a small amount of parallel corpus is available. Most downstream methods do not work well under low-resource conditions. This is especially true for approaches that use vectorial representations like Word2Vec. Our proposal is to construct bilingual word vectors from a graph. This graph is generated using translation pairs obtained from an unsupervised word alignment method. We show that, in a low-resource setting, these types of vectors are successful in representing words in a bilingual semantic space. Moreover, when a linear transformation is applied to translate words from one language to another, our graph-based representations considerably outperform the popular setting that uses Word2Vec.
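The linear transformation step mentioned at the end of this abstract is commonly learned by least squares from seed translation pairs. A toy sketch, with a hypothetical 2-D rotation standing in for real embedding spaces:

```python
import numpy as np

def fit_translation_map(X_src, Y_tgt):
    """Learn a linear map W with W @ x ≈ y from seed translation pairs
    (rows of X_src paired with rows of Y_tgt) by least squares."""
    A, *_ = np.linalg.lstsq(X_src, Y_tgt, rcond=None)
    return A.T

def translate(W, x, tgt_vecs, tgt_words):
    """Map a source vector into the target space and return the nearest
    target word by cosine similarity."""
    y = W @ x
    T = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    return tgt_words[int(np.argmax(T @ (y / np.linalg.norm(y))))]

# Toy setup: the target space is the source space rotated by 90 degrees.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.array([[0.0, -1.0], [1.0, 0.0]])
Y = X @ R.T
W = fit_translation_map(X, Y)
assert translate(W, np.array([1.0, 0.2]), Y, ["uno", "dos", "tres"]) == "uno"
```

With real embeddings the map is only approximate, so retrieval quality depends on how well the two spaces share geometric structure, which is exactly where the paper's graph-based vectors help.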
3

Guo, Yi, Junbin Gao, and Paul Kwan. "Visualization of Non-vectorial Data Using Twin Kernel Embedding". In 2006 International Workshop on Integrating AI and Data Mining. IEEE, 2006. http://dx.doi.org/10.1109/aidm.2006.18.

4

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Learning Out-Of Sample Mapping in Non-Vectorial Data Reduction using Constrained Twin Kernel Embedding". In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370108.
