To view other types of publications on this topic, follow the link: Embedding techniques.

Journal articles on the topic "Embedding techniques"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Embedding techniques".

Next to each work in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are present in the record's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Abstract:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embedding technique. Our method splits a graph into subgraphs to learn their embeddings in isolation and subsequently reconciles the embedding spaces derived for the subgraphs. We realize this idea through a novel distributed graph decomposition algorithm. In addition, we show how to implement our framework in Spark to enable efficient learning of effective embeddings. Experimental results illustrate that our approach scales well, while largely maintaining the embedding quality.
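The abstract does not spell out how the per-subgraph embedding spaces are reconciled, but a common way to align two independently learned spaces is an orthogonal Procrustes rotation fitted on vertices the subgraphs share. A minimal numpy sketch under that assumption (the dict-keyed embeddings and the `shared_ids` anchor set are illustrative, not the paper's API):

```python
import numpy as np

def reconcile(emb_a, emb_b, shared_ids):
    """Rotate embedding space B onto space A using orthogonal Procrustes
    over vertices present in both subgraphs (an assumed reconciliation step)."""
    A = np.stack([emb_a[i] for i in shared_ids])  # anchor vectors in space A
    B = np.stack([emb_b[i] for i in shared_ids])  # same anchors in space B
    U, _, Vt = np.linalg.svd(B.T @ A)             # SVD of the cross-covariance
    R = U @ Vt                                    # best rotation B -> A
    return {v: vec @ R for v, vec in emb_b.items()}
```

Because the rotation is orthogonal, it only reorients space B, so distances learned within the subgraph are preserved.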
2

Li, Pandeng, Yan Li, Hongtao Xie, and Lei Zhang. "Neighborhood-Adaptive Structure Augmented Metric Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1367–75. http://dx.doi.org/10.1609/aaai.v36i2.20025.

Abstract:
Most metric learning techniques typically focus on sample embedding learning, while implicitly assuming a homogeneous local neighborhood around each sample, based on the metrics used in training (e.g., a hypersphere for Euclidean distance or a unit hyperspherical crown for cosine distance). As real-world data often lie on a low-dimensional manifold curved in a high-dimensional space, it is unlikely that every region of the manifold shares the same local structure in the input space. Besides, considering the non-linearity of neural networks, the local structure in the output embedding space may not be as homogeneous as assumed. Therefore, representing each sample simply by its embedding while ignoring its individual neighborhood structure has limitations in Embedding-Based Retrieval (EBR). By exploiting the heterogeneity of local structures in the embedding space, we propose a Neighborhood-Adaptive Structure Augmented metric learning framework (NASA), where the neighborhood structure is realized as a structure embedding and learned along with the sample embedding in a self-supervised manner. In this way, without any modifications, most indexing techniques can be used to support large-scale EBR with NASA embeddings. Experiments on six standard benchmarks with two kinds of embeddings, i.e., binary embeddings and real-valued embeddings, show that our method significantly outperforms the state-of-the-art methods.
3

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (October 1, 2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Abstract:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosine between the concepts’ embedding vectors. Performance was compared with 2 traditional path-based measurements (shortest path and Leacock-Chodorow) and the publicly available concept embeddings, cui2vec, generated from large biomedical corpora. The concept sentence embeddings were also evaluated on a word sense disambiguation (WSD) task. Reference standards used included the semantic relatedness and semantic similarity datasets from the University of Minnesota, concept pairs generated from the Standardized MedDRA Queries, and the MeSH (Medical Subject Headings) WSD corpus. Results: Sentence embeddings generated by BioWordVec outperformed all other methods used individually in semantic relatedness measurements. Graph convolutional network graph embedding uniformly outperformed path-based measurements and was better than some word embeddings for the Standardized MedDRA Queries dataset. When used together, combined word and graph embedding achieved the best performance in all datasets. For WSD, the enhanced versions of BERT outperformed BioWordVec. Conclusions: Word and graph embedding techniques can be used to harness terms and relations in the UMLS to measure semantic relatedness between concepts. Concept sentence embedding outperforms path-based measurements and cui2vec, and can be further enhanced by combining with graph embedding.
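The relatedness score itself is the standard cosine between embedding vectors, and "used together" plausibly means combining the word-based and graph-based vectors; concatenation is shown below as one assumed combination strategy, not necessarily the paper's:

```python
import numpy as np

def relatedness(u, v):
    """Cosine similarity between two concept embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined(word_vec, graph_vec):
    """One plausible way of using word and graph embeddings together."""
    return np.concatenate([word_vec, graph_vec])
```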
4

Samanta, Saurav. "Noncommutativity from Embedding Techniques." Modern Physics Letters A 21, no. 08 (March 14, 2006): 675–89. http://dx.doi.org/10.1142/s0217732306019037.

Abstract:
We apply the embedding method of Batalin–Tyutin for revealing noncommutative structures in the generalized Landau problem. Different types of noncommutativity follow from different gauge choices. This establishes a duality among the distinct algebras. An alternative approach is discussed which yields results equivalent to those of the embedding method. We also discuss the consequences in the Landau problem for a non-constant magnetic field.
5

Tan, Eugene, Shannon Algar, Débora Corrêa, Michael Small, Thomas Stemler, and David Walker. "Selecting embedding delays: An overview of embedding techniques and a new method using persistent homology." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 3 (March 2023): 032101. http://dx.doi.org/10.1063/5.0137223.

Abstract:
Delay embedding methods are a staple tool in the field of time series analysis and prediction. However, the selection of embedding parameters can have a big impact on the resulting analysis. This has led to the creation of a large number of methods to optimize the selection of parameters such as embedding lag. This paper aims to provide a comprehensive overview of the fundamentals of embedding theory for readers who are new to the subject. We outline a collection of existing methods for selecting embedding lag in both uniform and non-uniform delay embedding cases. Highlighting the poor dynamical explainability of existing methods of selecting non-uniform lags, we provide an alternative method of selecting embedding lags that includes a mixture of both dynamical and topological arguments. The proposed method, Significant Times on Persistent Strands (SToPS), uses persistent homology to construct a characteristic time spectrum that quantifies the relative dynamical significance of each time lag. We test our method on periodic, chaotic, and fast-slow time series and find that our method performs similarly to existing automated non-uniform embedding methods. Additionally, [Formula: see text]-step predictors trained on embeddings constructed with SToPS were found to outperform other embedding methods when predicting fast-slow time series.
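For readers new to the area, the object whose parameters are being selected is the delay-embedding matrix, whose rows are lagged copies of the series. A minimal sketch of the uniform case (SToPS itself chooses non-uniform lags via persistent homology, which is not reproduced here):

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Uniform time-delay embedding: row t is (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

# Toy usage: reconstruct a sine wave's cycle in three dimensions.
x = np.sin(np.linspace(0, 20 * np.pi, 2000))
E = delay_embed(x, dim=3, lag=25)   # shape (1950, 3)
```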
6

Liang, Jiongqian, Saket Gurukar, and Srinivasan Parthasarathy. "MILE: A Multi-Level Framework for Scalable Graph Embedding." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 361–72. http://dx.doi.org/10.1609/icwsm.v15i1.18067.

Abstract:
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a graph convolution neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while generating embeddings of better quality, for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. Our code and data are publicly available with detailed instructions for adding new base embedding methods: https://github.com/jiongqian/MILE.
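A schematic of the coarsen-embed-refine loop described above, using matching-based coarsening from networkx and naive projection in place of MILE's learned graph-convolution refinement (both simplifications are assumptions of this sketch, as is the dict-based embedding interface):

```python
import networkx as nx

def coarsen(g):
    """One coarsening level: collapse each matched edge onto one endpoint."""
    mapping = {}
    for u, v in nx.maximal_matching(g):
        mapping[u] = mapping[v] = u
    for v in g:
        mapping.setdefault(v, v)            # unmatched nodes survive as-is
    coarse = nx.Graph()
    coarse.add_nodes_from(set(mapping.values()))
    coarse.add_edges_from((mapping[a], mapping[b])
                          for a, b in g.edges() if mapping[a] != mapping[b])
    return coarse, mapping

def mile_like(g, base_embed, levels=2):
    """Coarsen `levels` times, embed the coarsest graph with any base method,
    then project embeddings back level by level (MILE instead refines each
    projection with a learned graph convolution network)."""
    graphs, mappings = [g], []
    for _ in range(levels):
        coarse, mapping = coarsen(graphs[-1])
        graphs.append(coarse)
        mappings.append(mapping)
    emb = base_embed(graphs[-1])            # any existing embedding method
    for graph, mapping in zip(reversed(graphs[:-1]), reversed(mappings)):
        emb = {v: emb[mapping[v]] for v in graph}
    return emb
```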
7

Moudhich, Ihab, and Abdelhadi Fennan. "Evaluating sentiment analysis and word embedding techniques on Brexit." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (March 1, 2024): 695. http://dx.doi.org/10.11591/ijai.v13.i1.pp695-702.

Abstract:
In this study, we investigate the effectiveness of pre-trained word embeddings for sentiment analysis on a real-world topic, namely Brexit. We compare the performance of several popular word embedding models, such as global vectors for word representation (GloVe), FastText, word to vec (word2vec), and embeddings from language models (ELMo), on a dataset of tweets related to Brexit and evaluate their ability to classify the sentiment of the tweets as positive, negative, or neutral. We find that pre-trained word embeddings provide useful features for sentiment analysis and can significantly improve the performance of machine learning models. We also discuss the challenges and limitations of applying these models to complex, real-world texts such as those related to Brexit.
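A minimal sketch of the usual recipe behind such comparisons: parse a pre-trained vector file (the standard GloVe text layout) and mean-pool token vectors into tweet features for a downstream classifier. The path and dimensionality are placeholders:

```python
import numpy as np

def load_vectors(path):
    """Parse a GloVe-style text file into {word: vector}."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *nums = line.rstrip().split(" ")
            vecs[word] = np.asarray(nums, dtype=np.float32)
    return vecs

def tweet_features(tokens, vecs, dim=100):
    """Mean of the pre-trained vectors of in-vocabulary tokens."""
    hits = [vecs[t] for t in tokens if t in vecs]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, np.float32)
```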
8

Zhou, Jingya, Ling Liu, Wenqi Wei, and Jianxi Fan. "Network Representation Learning: From Preprocessing, Feature Extraction to Node Embedding." ACM Computing Surveys 55, no. 2 (March 31, 2023): 1–35. http://dx.doi.org/10.1145/3491206.

Abstract:
Network representation learning (NRL) advances the conventional graph mining of social networks, knowledge graphs, and complex biomedical and physics information networks. Dozens of NRL algorithms have been reported in the literature. Most of them focus on learning node embeddings for homogeneous networks, but they differ in the specific encoding schemes and specific types of node semantics captured and used for learning node embedding. This article reviews the design principles and the different node embedding techniques for NRL over homogeneous networks. To facilitate the comparison of different node embedding algorithms, we introduce a unified reference framework to divide and generalize the node embedding learning process on a given network into preprocessing steps, node feature extraction steps, and node embedding model training for an NRL task such as link prediction and node clustering. With this unifying reference framework, we highlight the representative methods, models, and techniques used at different stages of the node embedding model learning process. This survey not only helps researchers and practitioners gain an in-depth understanding of different NRL techniques but also provides practical guidelines for designing and developing the next generation of NRL algorithms and systems.
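As one concrete instance of the preprocessing, feature-extraction, and training stages the survey factors out, here is a DeepWalk-style pipeline: uniform random walks serve as node "sentences" and skip-gram (via gensim) learns the embeddings. The hyperparameters are illustrative:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(g, num_walks=10, walk_len=40, seed=0):
    """Feature extraction: uniform random walks from every node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in g.nodes():
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(g.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

g = nx.karate_club_graph()
model = Word2Vec(random_walks(g), vector_size=64, window=5, min_count=0, sg=1)
vec = model.wv["0"]   # embedding of node 0
```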
9

Goel, Mukta, and Rohit Goel. "Comparative Analysis of Hybrid Transform Domain Image Steganography Embedding Techniques." International Journal of Scientific Research 2, no. 2 (June 1, 2012): 388–90. http://dx.doi.org/10.15373/22778179/feb2013/131.

10

Srinidhi, K., T. L.S Tejaswi, CH Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Abstract:
We propose well-trained, sentiment-specific word embeddings, dubbed sentiment embeddings. Existing word and phrase embedding learning algorithms mainly make use of the contexts of terms but ignore the sentiment of texts, so in sentiment analysis, unlike words conveying the same meaning are matched to corresponding word vectors. We bridge this problem by encoding opinion-carrying text together with sentiment information in the word embeddings. For sentiment analysis on e-commerce and social networking sites, we develop neural-network-based algorithms with tailored loss functions that carry sentiment. This research applies the embeddings to word-level and sentence-level sentiment analysis and classification, and to constructing sentiment-oriented lexicons. Experimental results show that sentiment embedding techniques outperform context-based embeddings on many distributed datasets. This work also provides familiarity with neural-network techniques for learning word embeddings in other NLP tasks.
11

Sabbeh, Sahar F., and Heba A. Fasihuddin. "A Comparative Analysis of Word Embedding and Deep Learning for Arabic Sentiment Classification." Electronics 12, no. 6 (March 16, 2023): 1425. http://dx.doi.org/10.3390/electronics12061425.

Abstract:
Sentiment analysis on social media platforms (i.e., Twitter or Facebook) has become an important tool to learn about users’ opinions and preferences. However, the accuracy of sentiment analysis is disrupted by the challenges of natural language processing (NLP). Recently, deep learning models have proved superior performance over statistical- and lexical-based approaches in NLP-related tasks. Word embedding is an important layer of deep learning models to generate input features. Many word embedding models have been presented for text representation of both classic and context-based word embeddings. In this paper, we present a comparative analysis to evaluate both classic and contextualized word embeddings for sentiment analysis. The four most frequently used word embedding techniques were used in their trained and pre-trained versions. The selected embeddings represent classical and contextualized techniques. Classical word embedding includes algorithms such as GloVe, Word2vec, and FastText. By contrast, ARBERT is used as a contextualized embedding model. Since word embedding is more typically employed as the input layer in deep networks, we used the deep learning architectures BiLSTM and CNN for sentiment classification. To achieve these goals, the experiments were applied to a series of benchmark datasets: HARD, Khooli, AJGT, ArSAS, and ASTD. Finally, a comparative analysis was conducted on the results obtained for the experimented models. Our outcomes indicate that, generally, an embedding trained with a given technique achieves higher performance than the pretrained version of the same technique, by around 0.28 to 1.8% in accuracy, 0.33 to 2.17% in precision, and 0.44 to 2% in recall. Moreover, the contextualized transformer-based embedding model BERT achieved the highest performance in both its pretrained and trained versions. Additionally, the results indicate that BiLSTM outperforms CNN by approximately 2% on 3 datasets, HARD, Khooli, and ArSAS, while CNN achieved around 2% higher performance on the smaller datasets, AJGT and ASTD.
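A minimal Keras sketch of the BiLSTM configuration used on top of an embedding layer; the vocabulary size, dimensions, and the three-class softmax head are illustrative assumptions, not the paper's reported settings:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=50_000, output_dim=300),  # trained setting
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(3, activation="softmax"),               # sentiment classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```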
12

Ravindran, Renjith P., and Kavi Narayana Murthy. "Syntactic Coherence in Word Embedding Spaces." International Journal of Semantic Computing 15, no. 02 (June 2021): 263–90. http://dx.doi.org/10.1142/s1793351x21500057.

Abstract:
Word embeddings have recently become a vital part of many Natural Language Processing (NLP) systems. Word embeddings are a suite of techniques that represent words in a language as vectors in an n-dimensional real space that has been shown to encode a significant amount of syntactic and semantic information. When used in NLP systems, these representations have resulted in improved performance across a wide range of NLP tasks. However, it is not clear how syntactic properties interact with the more widely studied semantic properties of words, or what the main factors in the modeling formulation are that encourage embedding spaces to pick up more of the syntactic rather than the semantic behavior of words. We investigate several aspects of word embedding spaces and modeling assumptions that maximize syntactic coherence — the degree to which words with similar syntactic properties form distinct neighborhoods in the embedding space. We do so in order to understand which of the existing models maximize syntactic coherence, making it a more reliable source for extracting syntactic category (POS) information. Our analysis shows that the syntactic coherence of S-CODE is superior to that of the other more popular and more recent embedding techniques such as Word2vec, fastText, GloVe and LexVec, when measured under compatible parameter settings. Our investigation also gives deeper insights into the geometry of the embedding space with respect to syntactic coherence, and how this is influenced by context size, frequency of words, and dimensionality of the embedding space.
13

Sun, Yaozhu, Utkarsh Dhandhania, and Bruno C. d. S. Oliveira. "Compositional embeddings of domain-specific languages." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 175–203. http://dx.doi.org/10.1145/3563294.

Abstract:
A common approach to defining domain-specific languages (DSLs) is via a direct embedding into a host language. There are several well-known techniques to do such embeddings, including shallow and deep embeddings. However, such embeddings come with various trade-offs in existing programming languages. Owing to such trade-offs, many embedded DSLs end up using a mix of approaches in practice, requiring a substantial amount of code, as well as some advanced coding techniques. In this paper, we show that the recently proposed Compositional Programming paradigm and the CP language provide improved support for embedded DSLs. In CP we obtain a new form of embedding, which we call a compositional embedding, that has most of the advantages of both shallow and deep embeddings. On the one hand, compositional embeddings enable various forms of linguistic reuse that are characteristic of shallow embeddings, including the ability to reuse host-language optimizations in the DSL and add new DSL constructs easily. On the other hand, similarly to deep embeddings, compositional embeddings support definitions by pattern matching or dynamic dispatching (including dependent interpretations, transformations, and optimizations) over the abstract syntax of the DSL and have the ability to add new interpretations. We illustrate an instance of compositional embeddings with a DSL for document authoring called ExT. The DSL is highly flexible and extensible, allowing users to create various non-trivial extensions easily. For instance, ExT supports various extensions that enable the production of wiki-like documents, LaTeX documents, vector graphics or charts. The viability of compositional embeddings for ExT is evaluated with three applications.
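The shallow/deep trade-off the paper starts from fits in a few lines. A toy Python contrast (the paper's compositional embeddings live in the CP language, which this sketch does not attempt to reproduce): shallow constructs denote their meaning directly and reuse the host evaluator, while deep constructs reify syntax so multiple interpretations can traverse it.

```python
# Shallow embedding: a construct *is* its meaning (a thunk here).
def lit(n):    return lambda: n
def add(a, b): return lambda: a() + b()
print(add(lit(1), lit(2))())          # 3, evaluated by the host for free

# Deep embedding: constructs build an AST; interpretations traverse it.
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    a: object
    b: object

def evaluate(e):   # one interpretation ...
    return e.n if isinstance(e, Lit) else evaluate(e.a) + evaluate(e.b)

def show(e):       # ... and another, possible because syntax is reified
    return str(e.n) if isinstance(e, Lit) else f"({show(e.a)} + {show(e.b)})"

print(show(Add(Lit(1), Lit(2))))      # (1 + 2)
```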
14

Susanty, Meredita, and Sahrul Sukardi. "Perbandingan Pre-trained Word Embedding dan Embedding Layer untuk Named-Entity Recognition Bahasa Indonesia." Petir 14, no. 2 (September 2, 2021): 247–57. http://dx.doi.org/10.33322/petir.v14i2.1164.

Abstract:
Named-Entity Recognition (NER) is used to extract information from text by identifying entities such as the name of the person, organization, location, time, and other entities. Recently, machine learning approaches, particularly deep-learning, are widely used to recognize patterns of entities in sentences. Embedding, a process to convert text data into a number or vector of numbers, translates high dimensional vectors into relatively low-dimensional space. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words. The embedding process can be performed using the supervised learning method, which requires a large number of labeled data sets or an unsupervised learning approach. This study compares the two embedding methods; trainable embedding layer (supervised learning) and pre-trained word embedding (unsupervised learning). The trainable embedding layer uses the embedding layer provided by the Keras library while pre-trained word embedding uses word2vec, GloVe, and fastText to build NER using the BiLSTM architecture. The results show that GloVe had better performance than other embedding techniques with a micro average f1 score of 76.48.
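The two settings compared here differ mainly in how the Keras embedding layer is initialized. A minimal sketch following the common Keras pattern (a random matrix stands in for rows looked up from word2vec, GloVe, or fastText):

```python
import numpy as np
import tensorflow as tf

vocab_size, dim = 20_000, 300
emb_matrix = np.random.rand(vocab_size, dim).astype("float32")  # stand-in for
# vectors copied from a pre-trained model, keyed by the tokenizer's word ids

pretrained = tf.keras.layers.Embedding(
    vocab_size, dim,
    embeddings_initializer=tf.keras.initializers.Constant(emb_matrix),
    trainable=False,   # frozen: the "pre-trained word embedding" setting
)
trainable = tf.keras.layers.Embedding(vocab_size, dim)  # learned from scratch
```

Either layer can then feed the BiLSTM tagger; only the source and trainability of the weights change.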
15

Cheng, Weiyu, Yanyan Shen, Linpeng Huang, and Yanmin Zhu. "Dual-Embedding based Deep Latent Factor Models for Recommendation." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–24. http://dx.doi.org/10.1145/3447395.

Abstract:
Among various recommendation methods, latent factor models are usually considered to be state-of-the-art techniques, which aim to learn user and item embeddings for predicting user-item preferences. When applying latent factor models to the recommendation with implicit feedback, the quality of embeddings always suffers from inadequate positive feedback and noisy negative feedback. Inspired by the idea of NSVD that represents users based on their interacted items, this article proposes a dual-embedding based deep latent factor method for recommendation with implicit feedback. In addition to learning a primitive embedding for a user (resp. item), we represent each user (resp. item) with an additional embedding from the perspective of the interacted items (resp. users) and propose attentive neural methods to discriminate the importance of interacted users/items for dual-embedding learning. We design two dual-embedding based deep latent factor models, DELF and DESEQ, for pure collaborative filtering and temporal collaborative filtering (i.e., sequential recommendation), respectively. The novel attempt of the proposed models is to capture each user-item interaction with four deep representations that are subtly fused for preference prediction. We conducted extensive experiments on four real-world datasets. The results verify the effectiveness of user/item dual embeddings and the superior performance of our methods on item recommendation.
16

Barros, Claudio D. T., Matheus R. F. Mendonça, Alex B. Vieira, and Artur Ziviani. "A Survey on Embedding Dynamic Graphs." ACM Computing Surveys 55, no. 1 (January 31, 2023): 1–37. http://dx.doi.org/10.1145/3483595.

Abstract:
Embedding static graphs in low-dimensional vector spaces plays a key role in network analytics and inference, supporting applications like node classification, link prediction, and graph visualization. However, many real-world networks present dynamic behavior, including topological evolution, feature evolution, and diffusion. Therefore, several methods for embedding dynamic graphs have been proposed to learn network representations over time, facing novel challenges, such as time-domain modeling, temporal features to be captured, and the temporal granularity to be embedded. In this survey, we overview dynamic graph embedding, discussing its fundamentals and the recent advances developed so far. We introduce the formal definition of dynamic graph embedding, focusing on the problem setting and introducing a novel taxonomy for dynamic graph embedding input and output. We further explore different dynamic behaviors that may be encompassed by embeddings, classifying by topological evolution, feature evolution, and processes on networks. Afterward, we describe existing techniques and propose a taxonomy for dynamic graph embedding techniques based on algorithmic approaches, from matrix and tensor factorization to deep learning, random walks, and temporal point processes. We also elucidate main applications, including dynamic link prediction, anomaly detection, and diffusion prediction, and we further state some promising research directions in the area.
17

David, Merlin Susan, and Shini Renjith. "Comparison of word embeddings in text classification based on RNN and CNN." IOP Conference Series: Materials Science and Engineering 1187, no. 1 (September 1, 2021): 012029. http://dx.doi.org/10.1088/1757-899x/1187/1/012029.

Abstract:
This paper presents a comparison of word embeddings in text classification using RNN and CNN. In the field of image classification, deep learning methods such as RNN and CNN have proven to be popular. CNN is the most popular model among deep learning techniques in the field of NLP because of its simplicity and parallelism, even if the dataset is huge. The word embedding techniques employed are GloVe and fastText. The use of different word embeddings showed a major difference in the accuracy of the models. When it comes to embedding rare words, GloVe can sometimes perform poorly. In order to tackle this issue, the fastText method is used. Deep neural networks with fastText showed a remarkable improvement in accuracy over GloVe, although fastText took more time to train. Further, the accuracy was improved by minimizing the batch size. Finally, we conclude that word embeddings have a huge impact on the performance of text classification models.
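The rare-word behaviour mentioned above is structural: fastText composes a vector for an out-of-vocabulary word from its character n-grams, whereas a GloVe lookup table simply has no entry. A tiny gensim sketch (corpus and sizes are toy values):

```python
from gensim.models import FastText

corpus = [["graph", "embedding", "methods"], ["text", "classification"]]
model = FastText(corpus, vector_size=50, window=3, min_count=1)

vec = model.wv["embeddings"]   # absent from training, yet still gets a
                               # vector assembled from character n-grams
```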
18

Vadalà, Valeria, Gustavo Avolio, Antonio Raffo, Dominique M. M. P. Schreurs, and Giorgio Vannini. "Nonlinear embedding and de-embedding techniques for large-signal FET measurements." Microwave and Optical Technology Letters 54, no. 12 (September 25, 2012): 2835–38. http://dx.doi.org/10.1002/mop.27169.

19

Levy, Ronnie, and M. D. Rice. "Techniques and examples in U-embedding." Topology and its Applications 22, no. 2 (March 1986): 157–74. http://dx.doi.org/10.1016/0166-8641(86)90006-4.

20

Thodi, Diljith M., and Jeffrey J. Rodriguez. "Expansion Embedding Techniques for Reversible Watermarking." IEEE Transactions on Image Processing 16, no. 3 (March 2007): 721–30. http://dx.doi.org/10.1109/tip.2006.891046.

21

Song, J. M., F. Ling, W. Blood, E. Demircan, K. Sriram, G. Flynn, K. H. To, et al. "De-embedding techniques for embedded microstrips." Microwave and Optical Technology Letters 42, no. 1 (2004): 50–54. http://dx.doi.org/10.1002/mop.20204.

22

Takehara, Daisuke, and Kei Kobayashi. "Representing Hierarchical Structured Data Using Cone Embedding." Mathematics 11, no. 10 (May 15, 2023): 2294. http://dx.doi.org/10.3390/math11102294.

Abstract:
Extracting hierarchical structure in graph data is becoming an important problem in fields such as natural language processing and developmental biology. Hierarchical structures can be extracted by embedding methods in non-Euclidean spaces, such as Poincaré embedding and Lorentz embedding, and it is now possible to learn efficient embeddings by taking advantage of the structure of these spaces. In this study, we propose embedding into another type of metric space, called a metric cone, by learning only a one-dimensional coordinate variable added to the original vector space or a pre-trained embedding space. This allows for the extraction of hierarchical information while maintaining the properties of the pre-trained embedding. The metric cone is a one-dimensional extension of the original metric space and has the advantage that the curvature of the space can be easily adjusted by a parameter even when the coordinates of the original space are fixed. Through an extensive empirical evaluation we have corroborated the effectiveness of the proposed cone embedding model. In the case of randomly generated trees, cone embedding demonstrated superior performance in extracting hierarchical structures compared to existing techniques, particularly in high-dimensional settings. For WordNet embeddings, cone embedding exhibited a noteworthy correlation between the extracted hierarchical structures and human evaluation outcomes.
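The abstract does not give the metric-cone distance itself, so as background here is the Poincaré-ball distance that the hyperbolic baselines mentioned above optimize; a numpy sketch assuming both points lie strictly inside the unit ball:

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance in the Poincare ball (norms must be < 1)."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    delta = np.linalg.norm(u - v) ** 2 / ((1 - nu**2) * (1 - nv**2))
    return float(np.arccosh(1 + 2 * delta))
```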
23

Jin, Junchen, Mark Heimann, Di Jin, and Danai Koutra. "Toward Understanding and Evaluating Structural Node Embeddings." ACM Transactions on Knowledge Discovery from Data 16, no. 3 (June 30, 2022): 1–32. http://dx.doi.org/10.1145/3481639.

Abstract:
While most network embedding techniques model the proximity between nodes in a network, recently there has been significant interest in structural embeddings that are based on node equivalences, a notion rooted in sociology: equivalences or positions are collections of nodes that have similar roles—i.e., similar functions, ties or interactions with nodes in other positions—irrespective of their distance or reachability in the network. Unlike the proximity-based methods that are rigorously evaluated in the literature, the evaluation of structural embeddings is less mature. It relies on small synthetic or real networks with labels that are not perfectly defined, and its connection to sociological equivalences has hitherto been vague and tenuous. With new node embedding methods being developed at a breakneck pace, proper evaluation and systematic characterization of existing approaches will be essential to progress. To fill in this gap, we set out to understand what types of equivalences structural embeddings capture. We are the first to contribute rigorous intrinsic and extrinsic evaluation methodology for structural embeddings, along with carefully-designed, diverse datasets of varying sizes. We observe a number of different evaluation variables that can lead to different results (e.g., choice of similarity measure, classifier, and label definitions). We find that degree distributions within nodes’ local neighborhoods can lead to simple yet effective baselines in their own right and guide the future development of structural embedding. We hope that our findings can influence the design of further node embedding methods and also pave the way for more comprehensive and fair evaluation of structural embedding methods.
24

Prokhorov, Victor, Mohammad Taher Pilehvar, Dimitri Kartsaklis, Pietro Lio, and Nigel Collier. "Unseen Word Representation by Aligning Heterogeneous Lexical Semantic Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6900–6907. http://dx.doi.org/10.1609/aaai.v33i01.33016900.

Abstract:
Word embedding techniques heavily rely on the abundance of training data for individual words. Given the Zipfian distribution of words in natural language texts, a large number of words do not usually appear frequently or at all in the training data. In this paper we put forward a technique that exploits the knowledge encoded in lexical resources, such as WordNet, to induce embeddings for unseen words. Our approach adapts graph embedding and cross-lingual vector space transformation techniques in order to merge lexical knowledge encoded in ontologies with that derived from corpus statistics. We show that the approach can provide consistent performance improvements across multiple evaluation benchmarks: in vitro, on multiple rare word similarity datasets, and in vivo, in two downstream text classification tasks.
25

Angerer, Philipp, David S. Fischer, Fabian J. Theis, Antonio Scialdone, and Carsten Marr. "Automatic identification of relevant genes from low-dimensional embeddings of single-cell RNA-seq data." Bioinformatics 36, no. 15 (March 24, 2020): 4291–95. http://dx.doi.org/10.1093/bioinformatics/btaa198.

Abstract:
Motivation: Dimensionality reduction is a key step in the analysis of single-cell RNA-sequencing data. It produces a low-dimensional embedding for visualization and as a calculation base for downstream analysis. Nonlinear techniques are most suitable to handle the intrinsic complexity of large, heterogeneous single-cell data. However, with no linear relation between gene and embedding coordinate, there is no way to extract the identity of genes driving any cell’s position in the low-dimensional embedding, making it difficult to characterize the underlying biological processes. Results: In this article, we introduce the concepts of local and global gene relevance to compute an equivalent of principal component analysis loadings for non-linear low-dimensional embeddings. Global gene relevance identifies drivers of the overall embedding, while local gene relevance identifies those of a defined sub-region. We apply our method to single-cell RNA-seq datasets from different experimental protocols and to different low-dimensional embedding techniques. This shows our method’s versatility to identify key genes for a variety of biological processes. Availability and implementation: To ensure reproducibility and ease of use, our method is released as part of destiny 3.0, a popular R package for building diffusion maps from single-cell transcriptomic data. It is readily available through Bioconductor. Supplementary information: Supplementary data are available at Bioinformatics online.
26

Bhopale, P., and Ashish Tiwari. "Leveraging Neural Network Phrase Embedding Model for Query Reformulation in Ad-hoc Biomedical Information Retrieval." Malaysian Journal of Computer Science 34, no. 2 (April 30, 2021): 151–70. http://dx.doi.org/10.22452/mjcs.vol34no2.2.

Abstract:
This study presents a Spark-enhanced neural network phrase embedding model to leverage query representation for relevant biomedical literature retrieval. Information retrieval for clinical decision support demands high precision. In recent years, word embeddings have evolved as a solution to such requirements. They represent vocabulary words in low-dimensional vectors in the context of their similar words; however, they are inadequate to deal with semantic phrases or multi-word units. Learning vector embeddings for phrases by maintaining word meanings is a challenging task. This study proposes a scalable phrase embedding technique to embed multi-word units into vector representations using a state-of-the-art word embedding technique, keeping both words and phrases in the same vector space. It enhances the effectiveness and efficiency of query language models by expanding unseen query terms and phrases with semantically associated query terms. Embedding vectors are evaluated via a query expansion technique for an ad-hoc retrieval task over two benchmark corpora, viz. the TREC-CDS 2014 collection with 733,138 PubMed articles and the OHSUMED corpus with 348,566 articles collected from a Medline database. The results show that the proposed technique significantly outperforms other state-of-the-art retrieval techniques.
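A minimal sketch of the embedding-based query-expansion step described: append each in-vocabulary query term's nearest neighbours in the vector space. A gensim Word2Vec model stands in here for the paper's Spark-trained word/phrase model:

```python
from gensim.models import Word2Vec  # stand-in for the paper's phrase model

def expand_query(model, terms, k=3):
    """Expand a query with the k nearest embedding neighbours of each term."""
    expanded = list(terms)
    for t in terms:
        if t in model.wv:
            expanded += [w for w, _ in model.wv.most_similar(t, topn=k)]
    return expanded
```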
27

N, Nagendra, and Chandra J. "A Systematic Review on Features Extraction Techniques for Aspect Based Text Classification Using Artificial Intelligence." ECS Transactions 107, no. 1 (April 24, 2022): 2503–14. http://dx.doi.org/10.1149/10701.2503ecst.

Abstract:
Aspect extraction is an important and challenging task in aspect-based text classification. Variants of topic models have been applied to this task and, while reasonably successful, they usually do not produce highly coherent aspects. This review presents a novel neural approach to discovering coherent aspects that exploits the distribution of word co-occurrences through neural word embeddings. Unlike topic models, which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to lie close to each other in the embedding space. An attention mechanism is also used to de-emphasize irrelevant words during training, further improving the coherence of aspects. Results on benchmark datasets demonstrate that the approach discovers more meaningful and coherent aspects and substantially outperforms baselines. Aspect-based text analysis aims to determine people's attitudes towards different aspects in a review.
28

Moudhich, Ihab, and Abdelhadi Fennan. "Graph embedding approach to analyze sentiments on cryptocurrency." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 1 (February 1, 2024): 690. http://dx.doi.org/10.11591/ijece.v14i1.pp690-697.

Abstract:
This paper presents a comprehensive exploration of graph embedding techniques for sentiment analysis. The objective of this study is to enhance the accuracy of sentiment analysis models by leveraging the rich contextual relationships between words in text data. We investigate the application of graph embedding in the context of sentiment analysis, focusing on its effectiveness in capturing the semantic and syntactic information of text. By representing text as a graph and employing graph embedding techniques, we aim to extract meaningful insights and improve the performance of sentiment analysis models. To achieve our goal, we conduct a thorough comparison of graph embedding with traditional word embedding and simple embedding layers. Our experiments demonstrate that the graph embedding model outperforms these conventional models in terms of accuracy, highlighting its potential for sentiment analysis tasks. Furthermore, we address two limitations of graph embedding techniques: handling out-of-vocabulary words and incorporating sentiment shift over time. The findings of this study emphasize the significance of graph embedding techniques in sentiment analysis, offering valuable insights into sentiment analysis within various domains. The results suggest that graph embedding can capture intricate relationships between words, enabling a more nuanced understanding of the sentiment expressed in text data.
29

Newman, G. R., and J. A. Hobot. "Modern acrylics for post-embedding immunostaining techniques." Journal of Histochemistry & Cytochemistry 35, no. 9 (September 1987): 971–81. http://dx.doi.org/10.1177/35.9.3302021.

Abstract:
We describe two methods for rapid processing of biological tissues into LR White acrylic plastic. Both methods make use of LR White's compatibility with small amounts of water, enabling non-osmicated tissue to be only partially dehydrated before infiltration with the plastic, a procedure that improves the sensitivity of post-embedding immunocytochemistry. In addition, both methods are designed to reduce the time for which tissue is exposed to the damaging influence of the plastic monomer, which can cause extraction and sudden shrinkage. The tissue example used in the first method is immersion-fixed, surgically removed human pituitary which, by virtue of its thorough fixation, can be processed quickly at 50 degrees C using catalytic polymerization at room temperature. The concentration of the catalyst is critically set to prevent the temperature rising above 60 degrees C in the tissue blocks. Penetration of immunoperoxidase reagents into 330-nm LR White sections is demonstrated and possible modes of action are discussed. When "lightly" fixed tissue is processed as above, serious polymerization artifacts can result from autocatalysis. A second method, based on the first but employing slower polymerization at 0 degrees C, has therefore been developed. The high level of fine structure that can be retained using this method is illustrated by the demonstration of the trans-tubular Golgi in perfusion-fixed kidney of rat. Biotinylated lectin is localized to cells of the kidney proximal tubule with streptavidin-colloidal gold, to illustrate tissue reactivity. In a second example, the structure of the bacterial cell envelope is shown to be similar in appearance after partial dehydration and LR White embedding to that seen after progressive lowering of temperature, dehydration, and Lowicryl embedding.
30

Jackson, C. M. "Microwave de-embedding techniques applied to acoustics." IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 52, no. 7 (July 2005): 1094–100. http://dx.doi.org/10.1109/tuffc.2005.1503995.

31

Lin, Ching-Chiuan, Shih-Chieh Chen, and Nien-Lin Hsueh. "Adaptive embedding techniques for VQ-compressed images." Information Sciences 179, no. 1-2 (January 2009): 140–49. http://dx.doi.org/10.1016/j.ins.2008.09.001.

32

Kochsiek, Adrian, and Rainer Gemulla. "Parallel training of knowledge graph embedding models." Proceedings of the VLDB Endowment 15, no. 3 (November 2021): 633–45. http://dx.doi.org/10.14778/3494124.3494144.

Abstract:
Knowledge graph embedding (KGE) models represent the entities and relations of a knowledge graph (KG) using dense continuous representations called embeddings. KGE methods have recently gained traction for tasks such as knowledge graph completion and reasoning as well as to provide suitable entity representations for downstream learning tasks. While a large part of the available literature focuses on small KGs, a number of frameworks that are able to train KGE models for large-scale KGs by parallelization across multiple GPUs or machines have recently been proposed. So far, the benefits and drawbacks of the various parallelization techniques have not been studied comprehensively. In this paper, we report on an experimental study in which we presented, re-implemented in a common computational framework, investigated, and improved the available techniques. We found that the evaluation methodologies used in prior work are often not comparable and can be misleading, and that most of the currently implemented training methods tend to have a negative impact on embedding quality. We propose a simple but effective variation of the stratification technique used by PyTorch BigGraph for mitigation. Moreover, basic random partitioning can be an effective or even the best-performing choice when combined with suitable sampling techniques. Ultimately, we found that efficient and effective parallel training of large-scale KGE models is indeed achievable but requires a careful choice of techniques.
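For background, one widely used KGE objective of the kind such frameworks parallelize is the TransE margin-ranking loss over positive and negatively sampled triples; a PyTorch sketch (illustrative, not necessarily the objective the study isolates):

```python
import torch

def transe_loss(h, r, t, h_neg, t_neg, margin=1.0):
    """Margin ranking with the TransE score -||h + r - t||; the negative
    triples come from corrupting heads or tails with random entities."""
    pos = torch.norm(h + r - t, dim=-1)
    neg = torch.norm(h_neg + r - t_neg, dim=-1)
    return torch.relu(margin + pos - neg).mean()
```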
33

Jawale, Shila Sumol, and S. D. Sawarker. "Amalgamation of Embeddings With Model Explainability for Sentiment Analysis." International Journal of Applied Evolutionary Computation 13, no. 1 (January 1, 2022): 1–24. http://dx.doi.org/10.4018/ijaec.315629.

Abstract:
Given the ubiquity of digitalization and electronic processing, an automated review-processing system, also known as sentiment analysis, is crucial. Many architectures and word embeddings have been employed for effective sentiment analysis. Deep learning is nowadays becoming prominent for solving these problems, as huge amounts of data are generated every second. In deep learning, word embedding acts as a feature representation and plays an important role. This paper proposes a novel deep learning architecture based on hybrid embedding techniques that address the polysemy, semantic, and syntactic issues of a language model, along with justifying the model's predictions. The model is evaluated on sentiment identification tasks, obtaining F1-scores of 0.9254 and 0.88 on the MR and Kindle datasets, respectively. The proposed model outperforms many current techniques for both tasks in experiments, suggesting that combining context-free and context-dependent text representations potentially captures complementary features of word meaning. The model's decisions are justified with the help of visualization techniques such as t-SNE.
34

Satish, Preksha, Deeksha Lingraj, S. Anjan Kumar, and T. G. Keerthan Kumar. "Comparison of D-Vine and R-Vine Techniques for Virtual Network Embedding Problem." IOP Conference Series: Materials Science and Engineering 1187, no. 1 (September 1, 2021): 012035. http://dx.doi.org/10.1088/1757-899x/1187/1/012035.

Abstract:
In network virtualization, Virtual Network Embedding (VNE) is the process of mapping the virtual nodes and links of a virtual network request (VNR) onto a substrate network to fulfill the demands of the request. Embedding virtual network requests helps in achieving network virtualization efficiently. This paper presents Vineyard, a set of VN embedding algorithms, namely D-Vine and R-Vine, to introduce a finer correlation between the node mapping and link mapping phases. The Deterministic Virtual Network Embedding (D-Vine) algorithm embeds virtual nodes onto the substrate network based on the capacity constraint; when this constraint is not satisfied, the system uses the Randomised Virtual Network Embedding (R-Vine) algorithm to map the nodes based on the location constraint. Subsequently, after the nodes are embedded, the link mapping phase is carried out based on the distance constraint. A window-based virtual embedding algorithm (W-Vine) is also introduced to evaluate the effect of lookahead in Virtual Network Embedding. After the mapping of multiple virtual network requests, an analysis is done to compare the node and CPU utilization of the two algorithms, and the variations are recorded.
35

Rustad, Supriadi, Ignatius Moses Setiadi De Rosal, Pulung Nurtantio Andono, Abdul Syukur, and Purwanto. "Optimization of Cross Diagonal Pixel Value Differencing and Modulus Function Steganography Using Edge Area Block Patterns." Cybernetics and Information Technologies 22, no. 2 (June 1, 2022): 145–59. http://dx.doi.org/10.2478/cait-2022-0022.

Abstract:
The existence of a trade-off between embedding capacity and imperceptibility is a challenge to improving the quality of steganographic images. This research proposes cross-diagonal embedding with Pixel Value Differencing (PVD) and Modulus Function (MF) techniques, using edge-area patterns to improve embedding capacity and imperceptibility simultaneously while still maintaining good security. By applying them to 14 public datasets, the proposed techniques are shown to increase both capacity and imperceptibility. The cross-diagonal embedding PVD is responsible for increasing the embedding capacity, reaching an average value of 3.18 bits per pixel (bpp), and at the same time the edge-area block-pattern-based embedding improves imperceptibility, with an average PSNR above 40 dB and an average SSIM above 0.98. Aside from their success in increasing the embedding capacity and the imperceptibility, the proposed techniques remain resistant to RS attacks.
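The PVD primitive being optimized assigns more payload bits to pixel pairs with larger differences, since edges tolerate more distortion. A simplified Wu-Tsai-style sketch, without overflow handling and without the paper's cross-diagonal traversal or modulus-function refinements (it also assumes the bit list holds enough payload):

```python
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_embed(p1, p2, bits):
    """Embed bits into one pixel pair by moving |p1 - p2| within its range."""
    d = abs(p1 - p2)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    n = (hi - lo + 1).bit_length() - 1             # capacity of this pair
    payload, rest = bits[:n], bits[n:]
    d_new = lo + int("".join(map(str, payload)), 2)
    m = d_new - d                                  # spread change over both pixels
    if p1 >= p2:
        p1, p2 = p1 + (m + 1) // 2, p2 - m // 2
    else:
        p1, p2 = p1 - m // 2, p2 + (m + 1) // 2
    return p1, p2, rest
```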
36

Seshadhri, C., Aneesh Sharma, Andrew Stolman, and Ashish Goel. "The impossibility of low-rank representations for triangle-rich complex networks." Proceedings of the National Academy of Sciences 117, no. 11 (March 2, 2020): 5631–37. http://dx.doi.org/10.1073/pnas.1911030117.

Abstract:
The study of complex networks is a significant development in modern science, and has enriched the social sciences, biology, physics, and computer science. Models and algorithms for such networks are pervasive in our society, and impact human behavior via social networks, search engines, and recommender systems, to name a few. A widely used algorithmic technique for modeling such complex networks is to construct a low-dimensional Euclidean embedding of the vertices of the network, where proximity of vertices is interpreted as the likelihood of an edge. Contrary to the common view, we argue that such graph embeddings do not capture salient properties of complex networks. The two properties we focus on are low degree and large clustering coefficients, which have been widely established to be empirically true for real-world networks. We mathematically prove that any embedding (that uses dot products to measure similarity) that can successfully create these two properties must have a rank that is nearly linear in the number of vertices. Among other implications, this establishes that popular embedding techniques such as singular value decomposition and node2vec fail to capture significant structural aspects of real-world complex networks. Furthermore, we empirically study a number of different embedding techniques based on dot product, and show that they all fail to capture the triangle structure.
37

Güneş, Mehmet Emin, Eyüp Gemici, and Turgut Dönmez. "Comparison of Laparoscopic Embedding Technique and Other Techniques for Appendiceal Stump Closure." Turkish Journal of Colorectal Disease 29, no. 3 (September 1, 2019): 121–26. http://dx.doi.org/10.4274/tjcd.galenos.2019.78857.

38

Lamberton, Damien, and L. C. G. Rogers. "Optimal stopping and embedding." Journal of Applied Probability 37, no. 4 (December 2000): 1143–48. http://dx.doi.org/10.1239/jap/1014843094.

39

Lamberton, Damien, and L. C. G. Rogers. "Optimal stopping and embedding." Journal of Applied Probability 37, no. 04 (December 2000): 1143–48. http://dx.doi.org/10.1017/s0021900200018337.

40

Gupta, Richa. "An Analysis of Reversible Data Hiding Techniques in Images." Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1549–55. http://dx.doi.org/10.17762/msea.v70i2.2444.

Abstract:
A reversible data-hiding technique conceals sensitive information in an image for secure communication. Only an authorised party may decrypt the concealed communication and recreate the cover image. The strategies put forth by various researchers have not yet been able to deliver an image with both a high embedding rate and high reconstructed-image quality. In this work, several significant reversible data-hiding strategies are discussed, and a method is proposed that can enhance the quality of the reconstructed image, the embedding capacity, and the denoising of the image. This approach can offer secret data transfer that is more authentic, private, and dependable.
Keywords: Embedding rate, Encryption, Prediction error, PSNR, Reversible data-hiding, Steganography
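One classic reversible primitive such reviews cover is Tian's difference expansion, where the pixel-pair difference is doubled and the payload bit stored in its new least-significant bit, so both the bit and the original pair are exactly recoverable. A minimal sketch (overflow and underflow checks omitted):

```python
def embed_bit(p1, p2, bit):
    """Difference expansion on one pixel pair: d' = 2*d + bit."""
    avg, diff = (p1 + p2) // 2, p1 - p2
    d = 2 * diff + bit
    return avg + (d + 1) // 2, avg - d // 2

def extract_bit(q1, q2):
    """Inverse of embed_bit: recover the bit and the original pair."""
    avg, d = (q1 + q2) // 2, q1 - q2
    bit, diff = d & 1, d >> 1                  # arithmetic shift halves back
    return bit, avg + (diff + 1) // 2, avg - diff // 2

q1, q2 = embed_bit(100, 90, 1)                 # -> (106, 85)
assert extract_bit(q1, q2) == (1, 100, 90)     # fully reversible
```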
41

Guo, Kaifeng, and Guolei Zeng. "Graph convolutional network and self-attentive for sequential recommendation." PeerJ Computer Science 9 (December 1, 2023): e1701. http://dx.doi.org/10.7717/peerj-cs.1701.

Abstract:
Sequential recommender systems (SRS) aim to provide personalized recommendations to users in the context of large-scale datasets and complex user behavior sequences. However, the effectiveness of most existing embedding techniques in capturing the intricate relationships between items remains suboptimal, with a significant concentration of item embedding vectors that hinder the improvement of final prediction performance. Nevertheless, our study reveals that the distribution of item embeddings can be effectively dispersed through graph interaction networks and contrastive learning. In this article, we propose a graph convolutional neural network to capture the complex relationships between users and items, leveraging the learned embedding vectors of nodes to represent items. Additionally, we employ a self-attentive sequential model to predict outcomes based on the item embedding sequences of individual users. Furthermore, we incorporate instance-wise contrastive learning (ICL) and prototype contrastive learning (PCL) during the training process to enhance the effectiveness of representation learning. Broad comparative experiments and ablation studies were conducted across four distinct datasets. The experimental outcomes clearly demonstrate the superior performance of our proposed GSASRec model.
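The instance-wise contrastive component can be illustrated with the standard InfoNCE objective over a batch of anchor and positive embedding pairs; a PyTorch sketch of the generic form (the temperature value is illustrative, and this is not claimed to be the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """Each anchor must match its own positive against all others in the batch."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                   # (B, B) similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)           # diagonal = positives
```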
42

Khosa, Saima, Arif Mehmood, and Muhammad Rizwan. "Unifying Sentence Transformer Embedding and Softmax Voting Ensemble for Accurate News Category Prediction." Computers 12, no. 7 (July 8, 2023): 137. http://dx.doi.org/10.3390/computers12070137.

Abstract:
The study focuses on news category prediction and investigates the performance of sentence embedding of four transformer models (BERT, RoBERTa, MPNet, and T5) and their variants as feature vectors when combined with Softmax and Random Forest using two accessible news datasets from Kaggle. The data are stratified into train and test sets to ensure equal representation of each category. Word embeddings are generated using transformer models, with the last hidden layer selected as the embedding. Mean pooling calculates a single vector representation called sentence embedding, capturing the overall meaning of the news article. The performance of Softmax and Random Forest, as well as the soft voting of both, is evaluated using evaluation measures such as accuracy, F1 score, precision, and recall. The study also contributes by evaluating the performance of Softmax and Random Forest individually. The macro-average F1 score is calculated to compare the performance of different transformer embeddings in the same experimental settings. The experiments reveal that MPNet versions v1 and v3 achieve the highest F1 score of 97.7% when combined with Random Forest, while T5 Large embedding achieves the highest F1 score of 98.2% when used with Softmax regression. MPNet v1 performs exceptionally well when used in the voting classifier, obtaining an impressive F1 score of 98.6%. In conclusion, the experiments validate the superiority of certain transformer models, such as MPNet v1, MPNet v3, and DistilRoBERTa, when used to calculate sentence embeddings within the Random Forest framework. The results also highlight the promising performance of T5 Large and RoBERTa Large in voting of Softmax regression and Random Forest. The voting classifier, employing transformer embeddings and ensemble learning techniques, consistently outperforms other baselines and individual algorithms. These findings emphasize the effectiveness of the voting classifier with transformer embeddings in achieving accurate and reliable predictions for news category classification tasks.
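The sentence-embedding step described, mean pooling of the transformer's last hidden layer, is a short function; a PyTorch sketch assuming Hugging Face-style `last_hidden_state` and `attention_mask` tensors:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Mask-aware mean of token vectors -> one sentence embedding per input."""
    mask = attention_mask.unsqueeze(-1).float()      # (B, T, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # padding contributes zero
    return summed / mask.sum(dim=1).clamp(min=1e-9)  # (B, H)
```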
43

Chen, Xuelu, Muhao Chen, Weijia Shi, Yizhou Sun, and Carlo Zaniolo. "Embedding Uncertain Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3363–70. http://dx.doi.org/10.1609/aaai.v33i01.33013363.

Abstract:
Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating the structured knowledge they contain into machine learning. However, there are many KGs that model uncertain knowledge, which typically model the inherent uncertainty of relations facts with a confidence score, and embedding such uncertain knowledge represents an unresolved challenge. The capturing of uncertain knowledge will benefit many knowledge-driven applications such as question answering and semantic search by providing more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different confidence score modeling strategies. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e. confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results, and it consistently outperforms baselines on these tasks.
44

Wu, Shengsen, Liang Chen, Yihang Lou, Yan Bai, Tao Bai, Minghua Deng, and Ling-Yu Duan. "Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2722–30. http://dx.doi.org/10.1609/aaai.v36i3.20175.

Abstract:
In object re-identification (ReID), the development of deep learning techniques often involves model updates and deployment. It is unbearable to re-embed and re-index with the system suspended when deploying new models. Therefore, backward-compatible representation is proposed to enable "new" features to be compared with "old" features directly, which means that the database remains active when there are both "new" and "old" features in it. Thus we can scroll-refresh the database or even do nothing on the database to update. The existing backward-compatible methods either require a strong overlap between old and new training data or simply impose constraints at the instance level. Thus they have difficulty handling complicated cluster structures and are limited in eliminating the impact of outliers in old embeddings, resulting in a risk of damaging the discriminative capability of new features. In this work, we propose a Neighborhood Consensus Contrastive Learning (NCCL) method. With no assumptions about the new training data, we estimate the sub-cluster structures of old embeddings. A new embedding is constrained with multiple old embeddings in both the embedding space and the discrimination space at the sub-class level. The effect of outliers is diminished, as the multiple samples serve as "mean teachers". Besides, we propose a scheme to filter out old embeddings with low credibility, further improving the compatibility robustness. Our method ensures compatibility without impairing the accuracy of the new model. It can even improve the new model's accuracy in most scenarios.
45

Qiao, R., and Narayana R. Aluru. "Multiscale Simulation of Electroosmotic Transport Using Embedding Techniques." International Journal for Multiscale Computational Engineering 2, no. 2 (2004): 173–88. http://dx.doi.org/10.1615/intjmultcompeng.v2.i2.10.

46

He, S., and M. Wu. "Joint Coding and Embedding Techniques for Multimedia Fingerprinting." IEEE Transactions on Information Forensics and Security 1, no. 2 (June 2006): 231–47. http://dx.doi.org/10.1109/tifs.2006.873597.

47

Goyal, Palash, and Emilio Ferrara. "Graph embedding techniques, applications, and performance: A survey." Knowledge-Based Systems 151 (July 2018): 78–94. http://dx.doi.org/10.1016/j.knosys.2018.03.022.

48

Chang, Chin-Chen, Wen-Chuan Wu, and Yi-Hui Chen. "Joint coding and embedding techniques for multimedia images." Information Sciences 178, no. 18 (September 2008): 3543–56. http://dx.doi.org/10.1016/j.ins.2008.05.003.

49

Nerurkar, Pranav, Madhav Chandane, and Sunil Bhirud. "Survey of Network Embedding Techniques for Social Networks." Turkish Journal of Electrical Engineering & Computer Sciences 27, no. 6 (November 26, 2019): 4768–82. http://dx.doi.org/10.3906/elk-1807-333.

50

Rajalakshmi, K., and K. Mahesh. "A Review on Video Compression and Embedding Techniques." International Journal of Computer Applications 141, no. 12 (May 17, 2016): 32–36. http://dx.doi.org/10.5120/ijca2016909940.
