Academic literature on the topic 'Vectorial embeddings'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vectorial embeddings.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Vectorial embeddings"

1

Rydhe, Eskil. "Vectorial Hankel operators, Carleson embeddings, and notions of BMOA." Geometric and Functional Analysis 27, no. 2 (2017): 427–51. http://dx.doi.org/10.1007/s00039-017-0400-4.

2

Ion, Radu, Vasile Păiș, Verginica Barbu Mititelu, et al. "Unsupervised Word Sense Disambiguation Using Transformer’s Attention Mechanism." Machine Learning and Knowledge Extraction 7, no. 1 (2025): 10. https://doi.org/10.3390/make7010010.

Abstract:
Transformer models produce advanced text representations that have been used to break through the hard challenge of natural language understanding. Using the Transformer’s attention mechanism, which acts as a language learning memory, trained on tens of billions of words, a word sense disambiguation (WSD) algorithm can now construct a more faithful vectorial representation of the context of a word to be disambiguated. Working with a set of 34 lemmas of nouns, verbs, adjectives and adverbs selected from the National Reference Corpus of Romanian (CoRoLa), we show that using BERT’s attention head…
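The approach this abstract describes, pooling a transformer's attention into a context vector for the word to be disambiguated, can be illustrated with a short, hedged example. The snippet below is only a minimal sketch using the Hugging Face transformers library; the multilingual checkpoint, the English example sentence and the helper name attention_context_vector are stand-ins for illustration, not the authors' Romanian setup.

```python
# Minimal sketch, assuming the Hugging Face `transformers` and `torch` packages.
# The checkpoint, sentence and helper name are illustrative stand-ins.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"   # stand-in; not the authors' Romanian setup
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_attentions=True)
model.eval()

def attention_context_vector(sentence: str, target: str) -> torch.Tensor:
    """Pool the hidden states with the attention the target token pays to every token."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    hidden = out.last_hidden_state[0]           # (seq_len, dim)
    attn = out.attentions[-1][0].mean(dim=0)    # last layer, averaged over heads: (seq_len, seq_len)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    idx = next(i for i, t in enumerate(tokens) if t.lstrip("#").lower() == target.lower())
    weights = attn[idx].unsqueeze(-1)           # attention from the target to each position
    return (weights * hidden).sum(dim=0)        # attention-weighted context embedding

vec = attention_context_vector("The bank raised interest rates.", "bank")
print(vec.shape)   # torch.Size([768])
```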
3

Podda, Marco, Castrense Savojardo, Pier Luigi Martelli, et al. "A descriptor-free machine learning framework to improve antigen discovery for bacterial pathogens." PLOS ONE 20, no. 6 (2025): e0323895. https://doi.org/10.1371/journal.pone.0323895.

Abstract:
Identifying protective antigens (PAs), i.e., targets for bacterial vaccines, is challenging as conducting in-vivo tests at the proteome scale is impractical. Reverse Vaccinology (RV) aids in narrowing down the pool of candidates through computational screening of proteomes. Within RV, one prominent approach is to train Machine Learning (ML) models to classify PAs. These models can be used to predict unseen protein sequences and assist researchers in selecting promising candidates. Traditionally, proteins are fed into these models as vectors of biological and physico-chemical descriptors derive…
4

Szymański, Piotr. "A broadband multistate interferometer for impedance measurement." Journal of Telecommunications and Information Technology, no. 2 (June 30, 2005): 29–33. http://dx.doi.org/10.26636/jtit.2005.2.311.

Abstract:
We present a new four-state interferometer for measuring vectorial reflection coefficient from 50 to 1800 MHz. The interferometer is composed of a four-state phase shifter, a double-directional coupler and a spectrum analyzer with an in-built tracking generator. We describe a design of the interferometer and methods developed for its calibration and de-embedding the measurements. Experimental data verify good accuracy of the impedance measurement.
5

Hammer, Barbara, and Alexander Hasenfuss. "Topographic Mapping of Large Dissimilarity Data Sets." Neural Computation 22, no. 9 (2010): 2229–84. http://dx.doi.org/10.1162/neco_a_00012.

Abstract:
Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectorial data sets; they require a classical feature encoding of information. Often data are available in the form of pairwise distances only, such as arise from a kernel matrix, a graph, or some general dissimilarity measure. In such cases, NG and SOM cannot be applied directly. In thi…
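As a hedged illustration of the setting this abstract describes, working from a dissimilarity matrix alone without any vectorial feature encoding, the sketch below runs a plain k-medoids-style "median" loop in which prototypes are restricted to data indices. It is not the paper's relational NG/SOM; the function name and the toy dissimilarity matrix are invented for the example.

```python
# Minimal sketch, not the paper's relational NG/SOM: a k-medoids-style "median"
# loop that clusters directly from an (n, n) dissimilarity matrix D, with
# prototypes restricted to data indices. Names and toy data are invented.
import numpy as np

def median_clustering(D: np.ndarray, k: int = 3, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    prototypes = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(iters):
        assign = D[:, prototypes].argmin(axis=1)          # nearest prototype per point
        new_protos = prototypes.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members):
                # medoid update: the member minimising summed dissimilarity to its cluster
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_protos[c] = members[costs.argmin()]
        if np.array_equal(new_protos, prototypes):
            break
        prototypes = new_protos
    return prototypes, D[:, prototypes].argmin(axis=1)

# toy dissimilarity matrix built from 1-D points, just to exercise the function
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0])
D = np.abs(x[:, None] - x[None, :])
print(median_clustering(D, k=3))
```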
6

Riesen, Kaspar, and Horst Bunke. "Graph Classification Based on Vector Space Embedding." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 6 (2009): 1053–81. http://dx.doi.org/10.1142/s021800140900748x.

Abstract:
Graphs provide us with a powerful and flexible representation formalism for pattern classification. Many classification algorithms have been proposed in the literature. However, the vast majority of these algorithms rely on vectorial data descriptions and cannot directly be applied to graphs. Recently, a growing interest in graph kernel methods can be observed. Graph kernels aim at bridging the gap between the high representational power and flexibility of graphs and the large amount of algorithms available for object representations in terms of feature vectors. In the present paper, we propos…
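The dissimilarity-space idea behind this line of work, representing each graph by its edit distances to a set of prototype graphs so that standard vectorial classifiers apply, can be sketched in a few lines. The toy graphs, the one-prototype-per-class choice and the 1-NN classifier below are illustrative assumptions, not the paper's prototype-selection scheme.

```python
# Hedged sketch of a dissimilarity-space graph embedding: each graph becomes the
# vector of its edit distances to a small set of prototype graphs, after which any
# standard vectorial classifier applies. Toy data and choices are illustrative.
import networkx as nx
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def embed(graphs, prototypes):
    """Map each graph to its vector of graph edit distances to the prototypes."""
    return np.array([[nx.graph_edit_distance(g, p) for p in prototypes]
                     for g in graphs])

# toy data set: small cycles vs. small stars
graphs = [nx.cycle_graph(n) for n in (4, 5, 6)] + [nx.star_graph(n) for n in (4, 5, 6)]
labels = [0, 0, 0, 1, 1, 1]
prototypes = [nx.cycle_graph(5), nx.star_graph(5)]   # one prototype per class, for brevity

X = embed(graphs, prototypes)
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict(embed([nx.cycle_graph(7)], prototypes)))   # [0]: lands nearest the cycles
```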
7

Ji, Jiayi, Yunpeng Luo, Xiaoshuai Sun, et al. "Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1655–63. http://dx.doi.org/10.1609/aaai.v35i2.16258.

Abstract:
Transformer-based architectures have shown great success in image captioning, where object regions are encoded and then attended into the vectorial representations to guide the caption decoding. However, such vectorial representations only contain region-level information without considering the global information reflecting the entire image, which fails to expand the capability of complex multi-modal reasoning in image captioning. In this paper, we introduce a Global Enhanced Transformer (termed GET) to enable the extraction of a more comprehensive global representation, and then adaptively g…
8

Zhu, Huiming, Chunhui He, Yang Fang, Bin Ge, Meng Xing, and Weidong Xiao. "Patent Automatic Classification Based on Symmetric Hierarchical Convolution Neural Network." Symmetry 12, no. 2 (2020): 186. http://dx.doi.org/10.3390/sym12020186.

Abstract:
With the rapid growth of patent applications, it has become an urgent problem to automatically classify the accepted patent application documents accurately and quickly. Most previous patent automatic classification studies are based on feature engineering and traditional machine learning methods like SVM, and some even rely on the knowledge of domain experts, hence they suffer from low accuracy problem and have poor generalization ability. In this paper, we propose a patent automatic classification method via the symmetric hierarchical convolution neural network (CNN) named PAC-HCNN. We use t…
9

Dutta, Anjan, Pau Riba, Josep Lladós, and Alicia Fornés. "Hierarchical stochastic graphlet embedding for graph-based pattern recognition." Neural Computing and Applications 32, no. 15 (2019): 11579–96. http://dx.doi.org/10.1007/s00521-019-04642-7.

Abstract:
Despite being very successful within the pattern recognition and machine learning community, graph-based methods are often unusable because of the lack of mathematical operations defined in the graph domain. Graph embedding, which maps graphs to a vectorial space, has been proposed as a way to tackle these difficulties, enabling the use of standard machine learning techniques. However, it is well known that graph embedding functions usually suffer from the loss of structural information. In this paper, we consider the hierarchical structure of a graph as a way to mitigate this loss of infor…
10

Szemenyei, Márton, and Ferenc Vajda. "3D Object Detection and Scene Optimization for Tangible Augmented Reality." Periodica Polytechnica Electrical Engineering and Computer Science 62, no. 2 (2018): 25–37. http://dx.doi.org/10.3311/ppee.10482.

Abstract:
Object recognition in 3D scenes is one of the fundamental tasks in computer vision. It is used frequently in robotics or augmented reality applications [1]. In our work we intend to apply 3D shape recognition to create a Tangible Augmented Reality system that is able to pair virtual and real objects in natural indoors scenes. In this paper we present a method for arranging virtual objects in a real-world scene based on primitive shape graphs. For our scheme, we propose a graph node embedding algorithm for graphs with vectorial nodes and edges, and genetic operators designed to improve the qual…

Dissertations / Theses on the topic "Vectorial embeddings"

1

Cvetkov-Iliev, Alexis. "Embedding models for relational data analytics." Electronic Thesis or Diss., Université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG004.

Abstract:
Data analysis, for example via machine learning models, generally requires the data to be gathered into a single table describing the analyzed entities with a fixed number of attributes, or features. In practice, however, most datasets are relational (cf. relational databases and knowledge graphs), where the information on the entities of interest is irregular and scattered across several sources. To analyze such data, it is therefore necessary to assemble them into a single structure (generally a table), which requires…
2

Chinea, Ríos Mara. "Advanced techniques for domain adaptation in Statistical Machine Translation." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117611.

Abstract:
[ES] Statistical Machine Translation is a subfield of computational linguistics that investigates how to use computers in the process of translating a text from one human language to another. Statistical machine translation is the most popular approach used to build these automatic translation systems. The quality of such systems depends to a great extent on the translation examples used during the training and adaptation of the models. The datasets employed are obtained from a wide variety of sources, and in many…

Book chapters on the topic "Vectorial embeddings"

1

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Regularized Kernel Local Linear Embedding on Dimensionality Reduction for Non-vectorial Data." In AI 2009: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10439-8_25.

2

Yona, Golan. "Embedding Algorithms and Vectorial Representations." In Introduction to Computational Proteomics. Chapman and Hall/CRC, 2010. http://dx.doi.org/10.1201/9781420010770-11.

3

Nejadgholi, Isar, Renaud Bougueng, and Samuel Witherspoon. "A Semi-Supervised Training Method for Semantic Search of Legal Facts in Canadian Immigration Cases." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-838-9-125.

Abstract:
A semi-supervised approach was introduced to develop a semantic search system, capable of finding legal cases whose fact-asserting sentences are similar to a given query, in a large legal corpus. First, an unsupervised word embedding model learns the meaning of legal words from a large immigration law corpus. Then this knowledge is used to initiate the training of a fact detecting classifier with a small set of annotated legal cases. We achieved 90% accuracy in detecting fact sentences, where only 150 annotated documents were available. The hidden layer of the trained classifier is used to vec…
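A hedged sketch of the general recipe described above, training a small fact classifier and reusing its hidden layer as a sentence encoder for similarity search, is given below. TF-IDF features stand in for the chapter's word-embedding model, and the sentences, labels and helper names are invented for illustration.

```python
# Minimal sketch under simplifying assumptions: TF-IDF features instead of the
# chapter's word embeddings, invented sentences, and an sklearn MLP whose trained
# hidden layer is reused as a sentence encoder for cosine-similarity search.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The applicant entered Canada on a student visa in 2015.",    # fact-asserting
    "The officer rejected the claim without giving reasons.",     # fact-asserting
    "Section 96 of the Act defines a Convention refugee.",        # legal, not a case fact
    "The Act sets out the grounds for protection.",               # legal, not a case fact
]
is_fact = [1, 1, 0, 0]

vec = TfidfVectorizer().fit(sentences)
X = vec.transform(sentences).toarray()
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=2000, random_state=0).fit(X, is_fact)

def encode(texts):
    """Project texts through the trained classifier's hidden layer (ReLU)."""
    Z = vec.transform(texts).toarray()
    return np.maximum(0, Z @ clf.coefs_[0] + clf.intercepts_[0])

query = encode(["When did the applicant arrive in Canada?"])
corpus = encode(sentences)
print(sentences[int(cosine_similarity(query, corpus).argmax())])   # most similar stored sentence
```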
4

de Hoop, Adrianus T. "Array-structure theory of Maxwell wavefields in affine (3 + 1)-spacetime: An overview." In Pulsed Electromagnetic Fields: Their Potentialities, Computation and Evaluation. IOS Press, 2013. https://doi.org/10.3233/978-1-61499-230-1-21.

Abstract:
An array-structure theory of Maxwell wavefields in affine (3 + 1)-spacetime is presented. The structure is designed to supersede the conventional Gibbs vector calculus and Heaviside vectorial Maxwell equations formulations, deviates from the Einstein view on spacetime as having a metrical structure (with the, non-definite, Lorentz metric), and adheres to the Weyl view where spacetime is conceived as being affine in nature. In the theory, the electric field and source quantities are introduced as one-dimensional arrays and the magnetic field and source quantities as antisymmetrical two-dimensio…

Conference papers on the topic "Vectorial embeddings"

1

Aoun, Paulo Henrique Calado, Andre C. A. Nascimento, and Adenilton J. Da Silva. "Evaluation of Dimensionality Reduction and Truncation Techniques for Word Embeddings." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4477.

Abstract:
The use of word embeddings is becoming very common in many Natural Language Processing tasks. Most of the time, these require computational resources that cannot be found in most current mobile devices. In this work, we evaluate a combination of numeric truncation and dimensionality reduction strategies in order to obtain smaller vectorial representations without substantial losses in performance.
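The two strategies this abstract combines, numeric truncation and dimensionality reduction of pre-trained word vectors, can be illustrated with a minimal sketch. The random matrix below stands in for a real embedding table, and the specific choices (float16, PCA to 50 dimensions) are assumptions for illustration only.

```python
# Minimal sketch: two ways to shrink an embedding table, numeric truncation
# (lower precision) and dimensionality reduction (PCA). The random matrix
# stands in for real pre-trained word vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(10_000, 300)).astype(np.float32)   # vocabulary x dimensions

truncated = embeddings.astype(np.float16)                    # 1) halve storage by dropping precision
reduced = PCA(n_components=50).fit_transform(embeddings)     # 2) project 300-d vectors to 50-d

for name, m in [("original", embeddings), ("float16", truncated), ("PCA-50", reduced)]:
    print(f"{name}: shape={m.shape}, {m.nbytes / 1e6:.1f} MB")
```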
2

Gutierrez-Vasquez, Ximena, and Victor Mijangos. "Low-resource bilingual lexicon extraction using graph based word embeddings." In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai2018120323.

Abstract:
In this work we focus on the task of automatically extracting a bilingual lexicon for the language pair Spanish-Nahuatl. This is a low-resource setting where only a small amount of parallel corpus is available. Most of the downstream methods do not work well under low-resource conditions. This is especially true for the approaches that use vectorial representations like Word2Vec. Our proposal is to construct bilingual word vectors from a graph. This graph is generated using translation pairs obtained from an unsupervised word alignment method. We show that, in a low-resource setting, these type…
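A hedged sketch of the graph-based idea, building a bilingual word graph from seed translation pairs plus monolingual co-occurrence edges and embedding its nodes so that cross-lingual neighbours can be looked up, is shown below. The toy word pairs, the random-walk-profile embedding and the function names are illustrative stand-ins, not the paper's exact pipeline.

```python
# Minimal sketch, not the paper's exact pipeline: a bilingual word graph built
# from seed translation pairs plus monolingual co-occurrence edges, with each
# node embedded by its lazy random-walk profile. Toy data and names are invented.
import numpy as np
import networkx as nx

G = nx.Graph()
translation_pairs = [("water", "atl"), ("house", "calli"), ("dog", "itzcuintli")]
cooccurrence_edges = [("water", "river"), ("house", "home"), ("atl", "atoyatl")]
G.add_edges_from(translation_pairs + cooccurrence_edges)

nodes = list(G.nodes)
A = nx.to_numpy_array(G, nodelist=nodes)
P = A / A.sum(axis=1, keepdims=True)                             # random-walk transition matrix
M = np.linalg.matrix_power(0.5 * (np.eye(len(nodes)) + P), 4)    # 4-step lazy walk profiles
emb = M / np.linalg.norm(M, axis=1, keepdims=True)               # one unit vector per word

def nearest(word, k=2):
    sims = emb @ emb[nodes.index(word)]
    return [nodes[j] for j in np.argsort(-sims) if nodes[j] != word][:k]

print("river ->", nearest("river"))   # ['water', 'atl']: the co-occurring word and its seed translation
```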
3

Guo, Yi, Junbin Gao, and Paul Kwan. "Visualization of Non-vectorial Data Using Twin Kernel Embedding." In 2006 International Workshop on Integrating AI and Data Mining. IEEE, 2006. http://dx.doi.org/10.1109/aidm.2006.18.

4

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Learning Out-Of Sample Mapping in Non-Vectorial Data Reduction using Constrained Twin Kernel Embedding." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370108.

5

K R, Sushmitha, Rangalakshmi G R, and Suguna A. "Sentiment Analysis of Incoming Voice Calls." In International Conference on Recent Trends in Computing & Communication Technologies (ICRCCT’2K24). International Journal of Advanced Trends in Engineering and Management, 2024. http://dx.doi.org/10.59544/bisl3666/icrcct24p19.

Abstract:
This project aims to meet the increasing need for real-time sentiment analysis within voice call interactions, acknowledging the rising significance of voice-based engagements in today’s telecommunications realm. For instance, pre-trained word embeddings, such as Word2Vec, GloVe, and bidirectional encoder representations from transformers (BERT), generate vectors by considering word distances, similarities, and occurrences, ignoring other aspects such as word sentiment orientation. Aiming at such limitations, this paper presents a sentiment classification model (named LeBERT) combining sentimen…