Journal articles on the topic "Embedding Analysis"

To see the other types of publications on this topic, follow the link: Embedding Analysis.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Embedding Analysis".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, provided the necessary details are present in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lainscsek, Claudia, and Terrence J. Sejnowski. "Delay Differential Analysis of Time Series." Neural Computation 27, no. 3 (March 2015): 594–614. http://dx.doi.org/10.1162/neco_a_00706.

Abstract:
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
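To make the constructions above concrete, here is a minimal NumPy sketch of a uniform delay embedding, the textbook building block the paper generalizes; the signal, dimension, and lag below are illustrative, and this is not the authors' DDE code.

```python
import numpy as np

def delay_embed(x, dim=3, tau=25):
    """Map a scalar series to vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(3 * t)   # toy two-frequency signal
X = delay_embed(x)                    # (3950, 3): a geometric object from 1-D data
```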
2

Samko, Natasha. "Embeddings of weighted generalized Morrey spaces into Lebesgue spaces on fractal sets." Fractional Calculus and Applied Analysis 22, no. 5 (October 25, 2019): 1203–24. http://dx.doi.org/10.1515/fca-2019-0064.

Abstract:
We study embeddings of weighted local, and consequently global, generalized Morrey spaces defined on a quasi-metric measure set (X, d, μ) of general nature, which may be unbounded, into Lebesgue spaces L^s(X), 1 ≤ s ≤ p < ∞. The main motivation for obtaining such an embedding is to have an embedding of a non-separable Morrey space into a separable space. In the general setting of quasi-metric measure spaces and arbitrary weights, we give a sufficient condition for such an embedding. In the case of radial weights related to the center of the local Morrey space, we obtain an effective sufficient condition in terms of (in general, fractional) upper Ahlfors dimensions of the set X. In the case of radial weights, we also obtain necessary conditions for such embeddings of local and global Morrey spaces, with the use of (in general, fractional) lower and upper Ahlfors dimensions. In the case of power-logarithmic-type weights, we obtain a criterion for such embeddings when these dimensions coincide.
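For orientation, the display below recalls a standard form of the generalized Morrey norm on a quasi-metric measure space and the shape of the embeddings in question; the notation is a common convention and may differ in detail from the paper's weighted local and global definitions.

```latex
\[
\|f\|_{\mathcal{M}^{p,\varphi}(X)}
  = \sup_{x \in X,\; r>0} \frac{1}{\varphi(x,r)}
    \left( \int_{B(x,r)} |f(y)|^{p}\, d\mu(y) \right)^{1/p},
\qquad 1 \le p < \infty,
\]
\[
\mathcal{M}^{p,\varphi}(X) \hookrightarrow L^{s}(X),
\qquad 1 \le s \le p < \infty.
\]
```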
3

Sabbeh, Sahar F., and Heba A. Fasihuddin. "A Comparative Analysis of Word Embedding and Deep Learning for Arabic Sentiment Classification." Electronics 12, no. 6 (March 16, 2023): 1425. http://dx.doi.org/10.3390/electronics12061425.

Abstract:
Sentiment analysis on social media platforms (i.e., Twitter or Facebook) has become an important tool to learn about users’ opinions and preferences. However, the accuracy of sentiment analysis is disrupted by the challenges of natural language processing (NLP). Recently, deep learning models have proved superior performance over statistical- and lexical-based approaches in NLP-related tasks. Word embedding is an important layer of deep learning models to generate input features. Many word embedding models have been presented for text representation of both classic and context-based word embeddings. In this paper, we present a comparative analysis to evaluate both classic and contextualized word embeddings for sentiment analysis. The four most frequently used word embedding techniques were used in their trained and pre-trained versions. The selected embedding represents classical and contextualized techniques. Classical word embedding includes algorithms such as GloVe, Word2vec, and FastText. By contrast, ARBERT is used as a contextualized embedding model. Since word embedding is more typically employed as the input layer in deep networks, we used deep learning architectures BiLSTM and CNN for sentiment classification. To achieve these goals, the experiments were applied to a series of benchmark datasets: HARD, Khooli, AJGT, ArSAS, and ASTD. Finally, a comparative analysis was conducted on the results obtained for the experimented models. Our outcomes indicate that, generally, generated embedding by one technique achieves higher performance than its pretrained version for the same technique by around 0.28 to 1.8% accuracy, 0.33 to 2.17% precision, and 0.44 to 2% recall. Moreover, the contextualized transformer-based embedding model BERT achieved the highest performance in its pretrained and trained versions. Additionally, the results indicate that BiLSTM outperforms CNN by approximately 2% in 3 datasets, HARD, Khooli, and ArSAS, while CNN achieved around 2% higher performance in the smaller datasets, AJGT and ASTD.
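The trained-versus-pre-trained contrast is easy to picture with gensim; the sketch below (toy corpus, hypothetical file name, not the paper's experimental code) shows both ways of obtaining word vectors for the embedding layer.

```python
from gensim.models import Word2Vec

corpus = [["الخدمة", "ممتازة"], ["المنتج", "سيء"]]    # toy tokenized reviews
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5,
               min_count=1, sg=1, epochs=10)          # the "trained" variant
vec = w2v.wv["ممتازة"]                                 # 100-d input feature for a word

# The "pre-trained" variant would instead load published vectors, e.g.:
# from gensim.models import KeyedVectors
# kv = KeyedVectors.load_word2vec_format("arabic_vectors.bin", binary=True)
```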
4

He, Hongliang, Junlei Zhang, Zhenzhong Lan, and Yue Zhang. "Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12863–71. http://dx.doi.org/10.1609/aaai.v37i11.26512.

Abstract:
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performances in learning unsupervised sentence embeddings. However, in previous studies, each embedding used for contrastive learning only derived from one sentence instance, and we call these embeddings instance-level embeddings. In other words, each embedding is regarded as a unique class of its own, which may hurt the generalization performance. In this study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to smooth the boundaries of embeddings in the feature space. Specifically, we retrieve embeddings from a dynamic memory buffer according to the semantic similarity to get a positive embedding group. Then embeddings in the group are aggregated by a self-attention operation to produce a smoothed instance embedding for further analysis. We evaluate our method on standard semantic text similarity (STS) tasks and achieve an average of 78.30%, 79.47%, 77.73%, and 79.42% Spearman’s correlation on the base of BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large respectively, a 2.05%, 1.06%, 1.16% and 0.52% improvement compared to unsup-SimCSE.
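Our reading of the smoothing step can be sketched in a few lines of PyTorch: retrieve the nearest embeddings from a memory buffer and average them under attention weights. This is a simplified illustration with made-up names, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def smooth_embedding(z, buffer, k=16, temperature=0.05):
    """z: (d,) one sentence embedding; buffer: (N, d) recent embeddings."""
    sims = F.cosine_similarity(z.unsqueeze(0), buffer)  # (N,) similarity to buffer
    group = buffer[sims.topk(k).indices]                # (k, d) retrieved positive group
    attn = F.softmax(group @ z / temperature, dim=0)    # attention weights over group
    return attn @ group                                 # (d,) smoothed embedding

buffer = F.normalize(torch.randn(256, 768), dim=1)      # toy memory buffer
z = F.normalize(torch.randn(768), dim=0)
z_smooth = smooth_embedding(z, buffer)
```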
5

Srinidhi, K., T. L.S Tejaswi, CH Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Abstract:
We propose advanced, well-trained sentiment-specific word embeddings, dubbed sentiment embeddings. Available word- and phrase-embedding learning algorithms mainly make use of the contexts of terms but ignore the sentiment of texts, so in sentiment analysis, words with unlike sentiment but similar contexts can be matched to similar word vectors. This problem is bridged by combining the encoding of opinion-carrying text with sentiment embeddings of words. For sentiment analysis on e-commerce and social-networking sites, we developed neural-network-based algorithms along with tailored loss functions that carry sentiment. This research applies the embeddings to word-level and sentence-level sentiment analysis and classification and to constructing sentiment-oriented lexicons. Experimental results show that sentiment embedding techniques outperform context-based embeddings on many distributed datasets. This work also provides familiarity with neural-network techniques for learning word embeddings in other NLP tasks.
6

Ruskanda, Fariska Zakhralativa, Stefanus Stanley Yoga Setiawan, Nadya Aditama, and Masayu Leylia Khodra. "Sentiment Analysis of Sentence-Level using Dependency Embedding and Pre-trained BERT Model." PIKSEL : Penelitian Ilmu Komputer Sistem Embedded and Logic 11, no. 1 (March 31, 2023): 171–80. http://dx.doi.org/10.33558/piksel.v11i1.6938.

Abstract:
Sentiment analysis is a valuable field of research in NLP with many applications. The dependency tree is one of the language features that can be utilized in this field. Dependency embedding, as one of the semantic representations of a sentence, has been shown to provide more significant results compared to other embeddings, which makes it a potential way to improve the performance of sentiment analysis tasks. This study aimed to investigate the effect of dependency embedding on sentence-level sentiment analysis through experimental research. The study replaced the Vocabulary Graph embedding in the VGCN-BERT sentiment classification architecture with several dependency embedding representations, including the word vector, the context vector, the average of word and context vectors, a weighting of word and context vectors, and the concatenation of word and context vectors. The experiments were conducted on two datasets, SST-2 and CoLA, with more than 19 thousand labeled sentiment sentences. The results indicated that dependency embedding can enhance the performance of sentiment analysis at the sentence level.
7

Truică, Ciprian-Octavian, Elena-Simona Apostol, Maria-Luiza Șerban, and Adrian Paschke. "Topic-Based Document-Level Sentiment Analysis Using Contextual Cues." Mathematics 9, no. 21 (October 27, 2021): 2722. http://dx.doi.org/10.3390/math9212722.

Abstract:
Document-level Sentiment Analysis is a complex task that implies the analysis of large textual content that can incorporate multiple contradictory polarities at the phrase and word levels. Most of the current approaches either represent textual data using pre-trained word embeddings without considering the local context that can be extracted from the dataset, or they detect the overall topic polarity without considering both the local and global context. In this paper, we propose a novel document-topic embedding model, DocTopic2Vec, for document-level polarity detection in large texts by employing general and specific contextual cues obtained through the use of document embeddings (Doc2Vec) and Topic Modeling. In our approach, (1) we use a large dataset with game reviews to create different word embeddings by applying Word2Vec, FastText, and GloVe, (2) we create Doc2Vecs enriched with the local context given by the word embeddings for each review, (3) we construct topic embeddings Topic2Vec using three Topic Modeling algorithms, i.e., LDA, NMF, and LSI, to enhance the global context of the Sentiment Analysis task, (4) for each document and its dominant topic, we build the new DocTopic2Vec by concatenating the Doc2Vec with the Topic2Vec created with the same word embedding. We also design six new Convolutional-based (Bidirectional) Recurrent Deep Neural Network Architectures that show promising results for this task. The proposed DocTopic2Vecs are used to benchmark multiple Machine and Deep Learning models, i.e., a Logistic Regression model, used as a baseline, and 18 Deep Neural Networks Architectures. The experimental results show that the new embedding and the new Deep Neural Network Architectures achieve better results than the baseline, i.e., Logistic Regression and Doc2Vec.
8

Li, Qizhi, Xianyong Li, Yajun Du, Yongquan Fan, and Xiaoliang Chen. "A New Sentiment-Enhanced Word Embedding Method for Sentiment Analysis." Applied Sciences 12, no. 20 (October 11, 2022): 10236. http://dx.doi.org/10.3390/app122010236.

Abstract:
Since some sentiment words have similar syntactic and semantic features in the corpus, existing pre-trained word embeddings always perform poorly in sentiment analysis tasks. This paper proposes a new sentiment-enhanced word embedding (S-EWE) method to improve the effectiveness of sentence-level sentiment classification. This sentiment enhancement method takes full advantage of the mapping relationship between word embeddings and their corresponding sentiment orientations. This method first converts words to word embeddings and assigns sentiment mapping vectors to all word embeddings. Then, word embeddings and their corresponding sentiment mapping vectors are fused to S-EWEs. After reducing the dimensions of S-EWEs through a fully connected layer, the predicted sentiment orientations are obtained. The S-EWE method adopts the cross-entropy function to calculate the loss between predicted and true sentiment orientations, and backpropagates the loss to train the sentiment mapping vectors. Experiments show that the accuracy and macro-F1 values of six sentiment classification models using Word2Vec and GloVe with the S-EWEs are on average 1.07% and 1.58% higher than those without the S-EWEs on the SemEval-2013 dataset, and on average 1.23% and 1.26% higher than those without the S-EWEs on the SST-2 dataset. In all baseline models with S-EWEs, the convergence time of the attention-based bidirectional CNN-RNN deep model (ABCDM) with S-EWEs was significantly decreased by 51.21% of ABCDM on the SemEval-2013 dataset. The convergence time of CNN-LSTM with S-EWEs was vastly reduced by 41.34% of CNN-LSTM on the SST-2 dataset. In addition, the S-EWE method is not valid for contextualized word embedding models. The main reasons are that the S-EWE method only enhances the embedding layer of the models and has no effect on the models themselves.
9

Beckner, William. "Estimates on Moser Embedding." Potential Analysis 20, no. 4 (June 2004): 345–59. http://dx.doi.org/10.1023/b:pota.0000009813.38619.47.

10

Górka, Przemysław, Tomasz Kostrzewa, and Enrique G. Reyes. "Sobolev Spaces on Locally Compact Abelian Groups: Compact Embeddings and Local Spaces." Journal of Function Spaces 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/404738.

Abstract:
We continue our research on Sobolev spaces on locally compact abelian (LCA) groups motivated by our work on equations with infinitely many derivatives of interest for string theory and cosmology. In this paper, we focus on compact embedding results and we prove an analog for LCA groups of the classical Rellich lemma and of the Rellich-Kondrachov compactness theorem. Furthermore, we introduce Sobolev spaces on subsets of LCA groups and study their main properties, including the existence of compact embeddings into L^p-spaces.
11

KARLSSON, FRED. "Constraints on multiple center-embedding of clauses." Journal of Linguistics 43, no. 2 (June 18, 2007): 365–92. http://dx.doi.org/10.1017/s0022226707004616.

Abstract:
A common view in theoretical syntax and computational linguistics holds that there are no grammatical restrictions on multiple center-embedding of clauses. Syntax would thus be characterized by unbounded recursion. An analysis of 119 genuine multiple clausal center-embeddings from seven ‘Standard Average European’ languages (English, Finnish, French, German, Latin, Swedish, Danish) uncovers usage-based regularities, constraints, that run counter to these and several other widely held views, such as that any type of multiple self-embedding (of the same clause type) would be possible, or that self-embedding would be more complex than multiple center-embedding of different clause types. The maximal degree of center-embedding in written language is three. In spoken language, multiple center-embedding is practically absent. Typical center-embeddings of any degree involve relative clauses specifying the referent of the subject NP of the superordinate clause. Only postmodifying clauses, especially relative clauses and that-clauses acting as noun complements, allow central self-embedding. Double relativization of objects (The rat the cat the dog chased killed ate the malt) does not occur. These corpus-based ‘soft constraints’ suggest that full-blown recursion creating multiple clausal center-embedding is not a central design feature of language in use. Multiple center-embedding emerged with the advent of written language, with Aristotle, Cicero, and Livy in the Greek and Latin stylistic tradition of ‘periodic’ sentence composition.
12

Pietrasik, Marcin, and Marek Z. Reformat. "Probabilistic Coarsening for Knowledge Graph Embeddings." Axioms 12, no. 3 (March 6, 2023): 275. http://dx.doi.org/10.3390/axioms12030275.

Abstract:
Knowledge graphs have risen in popularity in recent years, demonstrating their utility in applications across the spectrum of computer science. Finding their embedded representations is thus highly desirable as it makes them easily operated on and reasoned with by machines. With this in mind, we propose a simple meta-strategy for embedding knowledge graphs using probabilistic coarsening. In this approach, a knowledge graph is first coarsened before being embedded by an arbitrary embedding method. The resulting coarse embeddings are then extended down as those of the initial knowledge graph. Although straightforward, this allows for faster training by reducing knowledge graph complexity while revealing its higher-order structures. We demonstrate this empirically on four real-world datasets, which show that coarse embeddings are learned faster and are often of higher quality. We conclude that coarsening is a recommended preprocessing step regardless of the underlying embedding method used.
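The meta-strategy is a three-step wrapper around any embedding method: coarsen, embed, extend down. The schematic below uses stand-in helpers (random vectors in place of a real KG embedding method) purely to show the data flow; it is not the paper's code.

```python
import numpy as np

def coarsen(edges, groups):
    """Collapse nodes into supernodes given a node -> supernode map."""
    return {(groups[u], groups[v]) for u, v in edges if groups[u] != groups[v]}

def embed_supernodes(super_edges, n_super, dim=8):
    rng = np.random.default_rng(0)
    return rng.normal(size=(n_super, dim))   # stand-in for TransE/DistMult/etc.

def embed_with_coarsening(edges, groups, dim=8):
    super_edges = coarsen(edges, groups)
    coarse = embed_supernodes(super_edges, max(groups.values()) + 1, dim)
    # extend down: each node starts from (and would then refine) its supernode's vector
    return {node: coarse[g].copy() for node, g in groups.items()}

emb = embed_with_coarsening([(0, 1), (1, 2), (2, 3)], {0: 0, 1: 0, 2: 1, 3: 1})
```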
13

Liu, Yi, Chengyu Yin, Jingwei Li, Fang Wang, and Senzhang Wang. "Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN." Algorithms 15, no. 3 (February 28, 2022): 80. http://dx.doi.org/10.3390/a15030080.

Abstract:
Accurately predicting user–item interactions is critically important in many real applications, including recommender systems and user behavior analysis in social networks. One major drawback of existing studies is that they generally directly analyze the sparse user–item interaction data without considering their semantic correlations and the structural information hidden in the data. Another limitation is that existing approaches usually embed the users and items into the different embedding spaces in a static way, but ignore the dynamic characteristics of both users and items. In this paper, we propose to learn the dynamic embedding vector trajectories rather than the static embedding vectors for users and items simultaneously. A Metapath-guided Recursive RNN based Shift embedding method named MRRNN-S is proposed to learn the continuously evolving embeddings of users and items for more accurately predicting their future interactions. The proposed MRRNN-S is extended from our previous model RRNN-S which was proposed in the earlier work. Compared with RRNN-S, we add the word2vec module and the skip-gram-based meta-path module to better capture the rich auxiliary information from the user–item interaction data. Specifically, we first regard the interaction data of each user with items as sentence data to model their semantic and sequential information and construct the user–item interaction graph. Then we sample the instances of meta-paths to capture the heterogeneity and structural information from the user–item interaction graph. A recursive RNN is proposed to iteratively and mutually learn the dynamic user and item embeddings in the same latent space based on their historical interactions. Next, a shift embedding module is proposed to predict the future user embeddings. To predict which item a user will interact with, we output the item embedding instead of the pairwise interaction probability between users and items, which is much more efficient. Through extensive experiments on three real-world datasets, we demonstrate that MRRNN-S achieves superior performance by extensive comparison with state-of-the-art baseline models.
14

Agustiningsih, Kartikasari Kusuma, Ema Utami, and Muhammad Altoumi Alsyaibani. "Sentiment Analysis of COVID-19 Vaccines in Indonesia on Twitter Using Pre-Trained and Self-Training Word Embeddings." Jurnal Ilmu Komputer dan Informasi 15, no. 1 (February 27, 2022): 39–46. http://dx.doi.org/10.21609/jiki.v15i1.1044.

Abstract:
Sentiment analysis regarding the COVID-19 vaccine can be obtained from social media because users usually express their opinions through social media. One of the social media that is most often used by Indonesian people to express their opinion is Twitter. The method used in this research is Bidirectional LSTM which will be combined with word embedding. In this study, fastText and GloVe were tested as word embedding. We created 8 test scenarios to inspect performance of the word embeddings, using both pre-trained and self-trained word embedding vectors. Dataset gathered from Twitter was prepared as stemmed dataset and unstemmed dataset. The highest accuracy from GloVe scenario group was generated by model which used self-trained GloVe and trained on unstemmed dataset. The accuracy reached 92.5%. On the other hand, the highest accuracy from fastText scenario group generated by model which used self-trained fastText and trained on stemmed dataset. The accuracy reached 92.3%. In other scenarios that used pre-trained embedding vector, the accuracy was quite lower than scenarios that used self-trained embedding vector, because the pre-trained embedding data was trained using the Wikipedia corpus which contains standard and well-structured language while the dataset used in this study came from Twitter which contains non-standard sentences. Even though the dataset was processed using stemming and slang words dictionary, the pre-trained embedding still can not recognize several words from our dataset.
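A minimal Keras sketch of the architecture described, a frozen embedding layer initialized from pre-trained (or self-trained) vectors feeding a BiLSTM, is shown below; the random matrix stands in for real GloVe/fastText vectors, and the hyperparameters are assumptions.

```python
import numpy as np
import tensorflow as tf

vocab_size, dim = 20000, 300
emb_matrix = np.random.rand(vocab_size, dim).astype("float32")  # stand-in vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, dim,
        embeddings_initializer=tf.keras.initializers.Constant(emb_matrix),
        trainable=False),                               # freeze the word vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```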
15

Zhang, Xiang. "Embedding smooth diffeomorphisms in flows." Journal of Differential Equations 248, no. 7 (April 2010): 1603–16. http://dx.doi.org/10.1016/j.jde.2009.09.013.

16

Tan, Eugene, Shannon Algar, Débora Corrêa, Michael Small, Thomas Stemler, and David Walker. "Selecting embedding delays: An overview of embedding techniques and a new method using persistent homology." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 3 (March 2023): 032101. http://dx.doi.org/10.1063/5.0137223.

Abstract:
Delay embedding methods are a staple tool in the field of time series analysis and prediction. However, the selection of embedding parameters can have a big impact on the resulting analysis. This has led to the creation of a large number of methods to optimize the selection of parameters such as embedding lag. This paper aims to provide a comprehensive overview of the fundamentals of embedding theory for readers who are new to the subject. We outline a collection of existing methods for selecting embedding lag in both uniform and non-uniform delay embedding cases. Highlighting the poor dynamical explainability of existing methods of selecting non-uniform lags, we provide an alternative method of selecting embedding lags that includes a mixture of both dynamical and topological arguments. The proposed method, Significant Times on Persistent Strands (SToPS), uses persistent homology to construct a characteristic time spectrum that quantifies the relative dynamical significance of each time lag. We test our method on periodic, chaotic, and fast-slow time series and find that our method performs similar to existing automated non-uniform embedding methods. Additionally, [Formula: see text]-step predictors trained on embeddings constructed with SToPS were found to outperform other embedding methods when predicting fast-slow time series.
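SToPS itself rests on persistent homology, but the classical baseline it is weighed against is easy to state: pick the lag at the first minimum of the average mutual information between x(t) and x(t+τ). A rough NumPy sketch of that baseline (not of SToPS) follows.

```python
import numpy as np

def ami(x, tau, bins=32):
    """Average mutual information between x(t) and x(t + tau)."""
    joint, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / np.outer(px, py)[nz])).sum())

def first_minimum_lag(x, max_tau=100):
    scores = [ami(x, tau) for tau in range(1, max_tau)]
    for i in range(1, len(scores) - 1):
        if scores[i] < scores[i - 1] and scores[i] < scores[i + 1]:
            return i + 1               # scores[i] corresponds to lag i + 1
    return int(np.argmin(scores)) + 1  # fallback: global minimum
```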
17

Li, Quanzhi, Sameena Shah, Xiaomo Liu, and Armineh Nourbakhsh. "Data Sets: Word Embeddings Learned from Tweets and General Data." Proceedings of the International AAAI Conference on Web and Social Media 11, no. 1 (May 3, 2017): 428–36. http://dx.doi.org/10.1609/icwsm.v11i1.14859.

Abstract:
A word embedding is a low-dimensional, dense and real-valued vector representation of a word. Word embeddings have been used in many NLP tasks. They are usually generated from a large text corpus. The embedding of a word captures both its syntactic and semantic aspects. Tweets are short, noisy and have unique lexical and semantic features that are different from other types of text. Therefore, it is necessary to have word embeddings learned specifically from tweets. In this paper, we present ten word embedding data sets. In addition to the data sets learned from just tweet data, we also built embedding sets from the general data and the combination of tweets and the general data. The general data consist of news articles, Wikipedia data and other web data. These ten embedding models were learned from about 400 million tweets and 7 billion words from the general data. In this paper, we also present two experiments demonstrating how to use the data sets in some NLP tasks, such as tweet sentiment analysis and tweet topic classification tasks.
18

Goel, Mukta, and Rohit Goel. "Comparative Analysis of Hybrid Transform Domain Image Steganography Embedding Techniques." International Journal of Scientific Research 2, no. 2 (June 1, 2012): 388–90. http://dx.doi.org/10.15373/22778179/feb2013/131.

19

Garg, Nikhil, Londa Schiebinger, Dan Jurafsky, and James Zou. "Word embeddings quantify 100 years of gender and ethnic stereotypes." Proceedings of the National Academy of Sciences 115, no. 16 (April 3, 2018): E3635–E3644. http://dx.doi.org/10.1073/pnas.1720347115.

Abstract:
Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts—e.g., the women’s movement in the 1960s and Asian immigration into the United States—and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science.
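The core measurement can be pictured as a difference of average cosine similarities between an occupation vector and two gendered word sets. The sketch below is our simplified illustration of that idea, not the paper's pipeline, and assumes the word vectors are already given.

```python
import numpy as np

def assoc(vec, group):
    """Mean cosine similarity of `vec` to a list of word vectors."""
    return float(np.mean([
        v @ vec / (np.linalg.norm(v) * np.linalg.norm(vec)) for v in group
    ]))

def gender_bias(occupation_vec, female_vecs, male_vecs):
    """Positive => the occupation sits closer to the female word set."""
    return assoc(occupation_vec, female_vecs) - assoc(occupation_vec, male_vecs)
```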
20

Spector, Daniel. "An optimal Sobolev embedding for L1." Journal of Functional Analysis 279, no. 3 (August 2020): 108559. http://dx.doi.org/10.1016/j.jfa.2020.108559.

21

Friz, Peter, and Nicolas Victoir. "A variation embedding theorem and applications." Journal of Functional Analysis 239, no. 2 (October 2006): 631–37. http://dx.doi.org/10.1016/j.jfa.2005.12.021.

22

Netzer, Tim, and Andreas Thom. "Tracial algebras and an embedding theorem." Journal of Functional Analysis 259, no. 11 (December 2010): 2939–60. http://dx.doi.org/10.1016/j.jfa.2010.08.010.

23

Wang, Zhuo-yuan, Xiao-yan Li, Xiao-jun Gou, Chun-lan Chen, Zun-yuan Li, Chuang Zhao, Wen-ge Huo, Yu-hong Guo, Yan Yang, and Zhi-dan Liu. "Network Meta-Analysis of Acupoint Catgut Embedding in Treatment of Simple Obesity." Evidence-Based Complementary and Alternative Medicine 2022 (May 23, 2022): 1–16. http://dx.doi.org/10.1155/2022/6408073.

Abstract:
Objective. To evaluate the clinical efficacy of acupoint catgut embedding in the treatment of simple obesity through network meta-analysis. Methods. PubMed, Cochrane, Embase, China National Knowledge Infrastructure (CNKI), Wanfang, and the VIP database were searched by computer from 2011 to August 2021, and 35 RCT studies were retrieved. The quality of the literature was evaluated using the modified Jadad scoring table, and Stata 15.0 software was used for traditional meta-analysis and network meta-analysis. Results. Thirty-five RCTs (3040 cases in total) were included. Acupoint embedding, acupuncture, electroacupuncture, TCM, acupoint embedding + acupuncture, acupoint embedding + exercise diet therapy, acupoint embedding + TCM, exercise diet therapy, acupoint embedding + moxibustion, and acupoint embedding + cupping were investigated in the studies. The results of network meta-analysis were as follows: in terms of total effective rate, acupoint catgut embedding was superior to acupuncture, electroacupuncture, and exercise diet therapy (P < 0.05); electroacupuncture, acupoint catgut embedding + acupuncture, acupoint catgut embedding + exercise diet therapy, acupoint catgut + TCM, acupoint catgut + moxibustion, and acupoint catgut + cupping were superior to acupuncture (P < 0.05); acupoint catgut + moxibustion was superior to electroacupuncture (P < 0.05); acupoint catgut + TCM, acupoint catgut + moxibustion, and acupoint catgut + cupping were superior to TCM treatment (P < 0.05); and electroacupuncture, acupoint catgut, acupoint catgut + acupuncture, acupoint catgut + exercise diet therapy, acupoint catgut + TCM, acupoint catgut embedding + moxibustion, and acupoint catgut embedding + cupping were superior to exercise diet therapy (P < 0.05). Regarding weight loss, acupuncture treatment was superior to acupoint catgut embedding therapy (P < 0.05); acupoint catgut embedding + exercise diet therapy, acupoint catgut embedding + TCM, acupoint catgut embedding + moxibustion, and acupoint catgut embedding + cupping were superior to acupuncture and electroacupuncture treatment (P < 0.05); acupoint catgut embedding + exercise diet therapy, acupoint catgut embedding + TCM, and acupoint catgut embedding + moxibustion were superior to TCM treatment (P < 0.05); and acupoint catgut embedding, acupoint catgut embedding + acupuncture, catgut embedding + exercise diet therapy, acupoint catgut embedding + TCM, acupoint catgut embedding + moxibustion, and acupoint catgut embedding + cupping were superior to exercise diet therapy (P < 0.05). In terms of BMI reduction, the effects of acupoint catgut embedding + moxibustion and acupoint catgut embedding + cupping were more evident than acupuncture treatment (P < 0.05); and acupoint catgut embedding + moxibustion was more evident than electroacupuncture treatment (P < 0.05). Conclusion. Acupoint catgut embedding and its combination with other therapies are the first choice for the treatment of simple obesity.
24

Li, Yu, Yuan Tian, Jiawei Zhang, and Yi Chang. "Learning Signed Network Embedding via Graph Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4772–79. http://dx.doi.org/10.1609/aaai.v34i04.5911.

Abstract:
Learning low-dimensional representations of graphs (i.e., network embedding) plays a critical role in network analysis and facilitates many downstream tasks. Recently, graph convolutional networks (GCNs) have revolutionized the field of network embedding and led to state-of-the-art performance in network analysis tasks such as link prediction and node classification. Nevertheless, most of the existing GCN-based network embedding methods are proposed for unsigned networks. However, in the real world, some networks are signed, where the links are annotated with different polarities, e.g., positive vs. negative. Since negative links may have different properties from positive ones and can also significantly affect the quality of network embedding, in this paper we propose a novel network embedding framework, SNEA, to learn Signed Network Embedding via graph Attention. In particular, we propose a masked self-attentional layer, which leverages the self-attention mechanism to estimate the importance coefficient for pairs of nodes connected by different types of links during the embedding aggregation process. SNEA then utilizes the masked self-attentional layers to aggregate more important information from neighboring nodes to generate the node embeddings based on balance theory. Experimental results demonstrate the effectiveness of the proposed framework through the signed link prediction task on several real-world signed network datasets.
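The masked-attention idea can be sketched as follows: scores are computed only for node pairs joined by a link of one polarity, then softmax-normalized per node. The toy dot-product scoring below replaces the paper's learned attention parameters; it is an illustration, not the SNEA release.

```python
import torch
import torch.nn.functional as F

def masked_attention_layer(h, adj_mask, W):
    """h: (N, d) node features; adj_mask: (N, N) bool, True where a link of the
    given polarity exists (include self-loops so no row is empty)."""
    z = h @ W                                   # (N, d') projected features
    scores = z @ z.T / z.shape[1] ** 0.5        # toy pairwise attention scores
    scores = scores.masked_fill(~adj_mask, float("-inf"))
    alpha = F.softmax(scores, dim=1)            # attend only over linked neighbors
    return alpha @ z                            # (N, d') aggregated embeddings

h, W = torch.randn(6, 8), torch.randn(8, 4)
mask = (torch.rand(6, 6) > 0.5) | torch.eye(6, dtype=torch.bool)
out = masked_attention_layer(h, mask, W)
```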
25

Xiao, Bai, Edwin Hancock, and Hang Yu. "Manifold embedding for shape analysis." Neurocomputing 73, no. 10-12 (June 2010): 1606–13. http://dx.doi.org/10.1016/j.neucom.2009.10.023.

26

Zhang, Junping, Qi Wang, Li He, and Zhi-Hua Zhou. "Quantitative Analysis of Nonlinear Embedding." IEEE Transactions on Neural Networks 22, no. 12 (December 2011): 1987–98. http://dx.doi.org/10.1109/tnn.2011.2171991.

27

Fleischmann, Oliver, Lennart Wietzke, and Gerald Sommer. "Image Analysis by Conformal Embedding." Journal of Mathematical Imaging and Vision 40, no. 3 (February 9, 2011): 305–25. http://dx.doi.org/10.1007/s10851-011-0263-5.

28

Jangi, S., and Y. Jain. "Embedding spectral analysis in equipment." IEEE Spectrum 28, no. 2 (February 1991): 40–43. http://dx.doi.org/10.1109/6.100909.

29

Kokane, Chandrakant D., Sachin D. Babar, Parikshit N. Mahalle, and Shivprasad P. Patil. "Word sense disambiguation: Mathematical modelling of adaptive word embedding technique for word vector." Journal of Interdisciplinary Mathematics 26, no. 3 (2023): 475–82. http://dx.doi.org/10.47974/jim-1675.

Abstract:
Word embedding is the method of representing ambiguous words as word vectors. Existing word embedding methods are applicable to homonymous words; constructing word vectors for polysemous words is the challenge, since their vectors must be built by considering context information. The proposed adaptive word embedding technique discussed in this article is applicable to both homonymous and polysemous words. While representing an ambiguous word as a word vector, context information is considered, and the technique generates a dynamic word vector for the ambiguous word. Word vectors of dimension 198 are created, i.e., 198 features are considered in the discussed model, with countable nouns used as the features.
30

Shi, Yaxin, Donna Xu, Yuangang Pan, Ivor W. Tsang, and Shirui Pan. "Label Embedding with Partial Heterogeneous Contexts." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4926–33. http://dx.doi.org/10.1609/aaai.v33i01.33014926.

Abstract:
Label embedding plays an important role in many real-world applications. To enhance the label relatedness captured by the embeddings, multiple contexts can be adopted. However, these contexts are heterogeneous and often partially observed in practical tasks, imposing significant challenges to capture the overall relatedness among labels. In this paper, we propose a general Partial Heterogeneous Context Label Embedding (PHCLE) framework to address these challenges. Categorizing heterogeneous contexts into two groups, relational context and descriptive context, we design tailor-made matrix factorization formula to effectively exploit the label relatedness in each context. With a shared embedding principle across heterogeneous contexts, the label relatedness is selectively aligned in a shared space. Due to our elegant formulation, PHCLE overcomes the partial context problem and can nicely incorporate more contexts, which both cannot be tackled with existing multi-context label embedding methods. An effective alternative optimization algorithm is further derived to solve the sparse matrix factorization problem. Experimental results demonstrate that the label embeddings obtained with PHCLE achieve superb performance in image classification task and exhibit good interpretability in the downstream label similarity analysis and image understanding task.
31

Franchetti, Carlo, and E. W. Cheney. "The embedding of proximinal sets." Journal of Approximation Theory 48, no. 2 (October 1986): 213–25. http://dx.doi.org/10.1016/0021-9045(86)90006-7.

32

Baalous, Rawan, and Ronald Poet. "Utilizing Sentence Embedding for Dangerous Permissions Detection in Android Apps' Privacy Policies." International Journal of Information Security and Privacy 15, no. 1 (January 2021): 173–89. http://dx.doi.org/10.4018/ijisp.2021010109.

Abstract:
Privacy policies analysis relies on understanding sentences meaning in order to identify sentences of interest to privacy related applications. In this paper, the authors investigate the strengths and limitations of sentence embeddings to detect dangerous permissions in Android apps privacy policies. Sent2Vec sentence embedding model was utilized and trained on 130,000 Android apps privacy policies. The terminology extracted by the sentence embedding model was then compared with the gold standard on a dataset of 564 privacy policies. This work seeks to provide answers to researchers and developers interested in extracting privacy related information from privacy policies using sentence embedding models. In addition, it may help regulators interested in deploying sentence embedding models to check for privacy policies' compliance with the government regulations and to identify points of inconsistencies or violations.
33

Ravindran, Renjith P., and Kavi Narayana Murthy. "Syntactic Coherence in Word Embedding Spaces." International Journal of Semantic Computing 15, no. 02 (June 2021): 263–90. http://dx.doi.org/10.1142/s1793351x21500057.

Abstract:
Word embeddings have recently become a vital part of many Natural Language Processing (NLP) systems. Word embeddings are a suite of techniques that represent words in a language as vectors in an n-dimensional real space that has been shown to encode a significant amount of syntactic and semantic information. When used in NLP systems, these representations have resulted in improved performance across a wide range of NLP tasks. However, it is not clear how syntactic properties interact with the more widely studied semantic properties of words. Or what the main factors in the modeling formulation are that encourages embedding spaces to pick up more of syntactic behavior as opposed to semantic behavior of words. We investigate several aspects of word embedding spaces and modeling assumptions that maximize syntactic coherence — the degree to which words with similar syntactic properties form distinct neighborhoods in the embedding space. We do so in order to understand which of the existing models maximize syntactic coherence making it a more reliable source for extracting syntactic category (POS) information. Our analysis shows that syntactic coherence of S-CODE is superior to the other more popular and more recent embedding techniques such as Word2vec, fastText, GloVe and LexVec, when measured under compatible parameter settings. Our investigation also gives deeper insights into the geometry of the embedding space with respect to syntactic coherence, and how this is influenced by context size, frequency of words, and dimensionality of the embedding space.
34

Jawale, Shila Sumol, and S. D. Sawarker. "Amalgamation of Embeddings With Model Explainability for Sentiment Analysis." International Journal of Applied Evolutionary Computation 13, no. 1 (January 1, 2022): 1–24. http://dx.doi.org/10.4018/ijaec.315629.

Abstract:
Given the ubiquity of digitalization and electronic processing, an automated review-processing system, also known as sentiment analysis, is crucial. Many architectures and word embeddings have been employed for effective sentiment analysis. Deep learning is nowadays becoming prominent for solving these problems, as huge amounts of data are generated every second. In deep learning, word embedding acts as a feature representation and plays an important role. This paper proposes a novel deep learning architecture with hybrid embedding techniques that addresses the polysemy, semantic, and syntactic issues of a language model, along with justifying the model's predictions. The model is evaluated on sentiment identification tasks, obtaining F1-scores of 0.9254 and 0.88 on the MR and Kindle datasets, respectively. The proposed model outperforms many current techniques for both tasks in experiments, suggesting that combining context-free and context-dependent text representations potentially captures complementary features of word meaning. The model's decisions are justified with the help of visualization techniques such as t-SNE.
35

Zhao, Peilian, Cunli Mao, and Zhengtao Yu. "Semi-Supervised Aspect-Based Sentiment Analysis for Case-Related Microblog Reviews Using Case Knowledge Graph Embedding." International Journal of Asian Language Processing 30, no. 03 (September 2020): 2050012. http://dx.doi.org/10.1142/s2717554520500125.

Abstract:
Aspect-Based Sentiment Analysis (ABSA), a fine-grained opinion-mining task that aims to extract the sentiment toward a specific target from text, is important in many real-world applications, especially in the legal field. In this paper, we therefore study the problems of the limited labeled training data available and of the neglect of in-domain knowledge representation for End-to-End Aspect-Based Sentiment Analysis (E2E-ABSA) in the legal field. We propose a new method under a deep learning framework, named Semi-ETEKGs, which applies the E2E framework using knowledge graph (KG) embeddings in the legal field after data augmentation (DA). Specifically, we pre-trained the BERT embedding and an in-domain KG embedding on unlabeled data and labeled data with case elements after DA, and then put the two embeddings into the E2E framework to classify the polarity of target entities. Finally, we built a case-related dataset based on a popular benchmark for ABSA to prove the efficiency of Semi-ETEKGs, and experiments on the case-related dataset of microblog comments show that our proposed model outperforms the other compared methods significantly.
36

Nadirashvili, Nikolai, and Yu Yuan. "Improving Pogorelov’s isometric embedding counterexample." Calculus of Variations and Partial Differential Equations 32, no. 3 (March 27, 2008): 319–23. http://dx.doi.org/10.1007/s00526-007-0140-7.

37

Zhang, Xiang. "The embedding flows of C∞ hyperbolic diffeomorphisms." Journal of Differential Equations 250, no. 5 (March 2011): 2283–98. http://dx.doi.org/10.1016/j.jde.2010.12.022.

38

Greco, L., and G. Moscariello. "An embedding theorem in Lorentz-Zygmund spaces." Potential Analysis 5, no. 6 (December 1996): 581–90. http://dx.doi.org/10.1007/bf00275795.

39

Spielberg, John S. "Embedding C∗-algebra extensions into AF algebras." Journal of Functional Analysis 81, no. 2 (December 1988): 325–44. http://dx.doi.org/10.1016/0022-1236(88)90104-8.

40

Khine, Aye Hninn, Wiphada Wettayaprasit, and Jarunee Duangsuwan. "A novel meta-embedding technique for drug reviews sentiment analysis." IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 4 (December 1, 2023): 1938. http://dx.doi.org/10.11591/ijai.v12.i4.pp1938-1946.

Abstract:
Traditional word embedding models have been used in the feature extraction process of deep learning models for sentiment analysis. However, these models ignore the sentiment properties of words while maintaining the contextual relationships, and they have inadequate representations for domain-specific words. This paper proposes a method to develop a meta-embedding model by exploiting domain sentiment polarity and adverse drug reaction (ADR) features to render word embedding models more suitable for medical sentiment analysis. The proposed lexicon is developed from a corpus of medical blogs. The polarity scores of the existing lexicons are adjusted to assign a new polarity score to each word. A neural network model utilizes the sentiment lexicons and ADR features in learning a refined word embedding. The refined embedding obtained from the proposed approach is concatenated with the original word vectors, lexicon vectors, and ADR feature to form a meta-embedding model which maintains both contextual and sentiment properties. The final meta-embedding acts as a feature extractor to assess the effectiveness of the model in drug reviews sentiment analysis. The experiments are conducted on global vectors (GloVe) and skip-gram word2vec (Word2Vec) models. The empirical results demonstrate that the proposed meta-embedding model outperforms traditional word embeddings on different performance measures.
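The final concatenation step reads naturally as a per-word feature builder; the sketch below is a schematic with assumed inputs (lookup tables for the original and refined vectors, a polarity lexicon, and an ADR term set), not the paper's implementation.

```python
import numpy as np

def meta_embed(word, w2v, refined, lexicon, adr_terms):
    base     = w2v[word]                            # original GloVe/Word2Vec vector
    refine   = refined[word]                        # sentiment-refined vector
    polarity = np.array([lexicon.get(word, 0.0)])   # adjusted lexicon score
    adr      = np.array([1.0 if word in adr_terms else 0.0])
    return np.concatenate([base, refine, polarity, adr])
```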
41

YOU, QUBO, NANNING ZHENG, LING GAO, SHAOYI DU, and YANG WU. "ANALYSIS OF SOLUTION FOR SUPERVISED GRAPH EMBEDDING." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 07 (November 2008): 1283–99. http://dx.doi.org/10.1142/s021800140800679x.

Abstract:
Recently, Graph Embedding Framework has been proposed for feature extraction. However, it is still an open issue on how to compute robust discriminant transformation for this purpose. In this paper, we show that supervised graph embedding algorithms share a general criterion. Based on the analysis of this criterion, we propose a general solution, called General Solution for Supervised Graph Embedding (GSSGE), for extracting the robust discriminant transformation of Supervised Graph Embedding. Then, we analyze the superiority of our algorithm over traditional algorithms. Extensive experiments on both artificial and real-world data are performed to demonstrate the effectiveness and robustness of our proposed GSSGE.
42

Gunti, Nethra, Sathyanarayanan Ramamoorthy, Parth Patwa, and Amitava Das. "Memotion Analysis through the Lens of Joint Embedding (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12959–60. http://dx.doi.org/10.1609/aaai.v36i11.21616.

Abstract:
Joint embedding (JE) is a way to encode multi-modal data into a vector space where text remains the grounding key and other modalities, such as images, are anchored to such keys. A meme is typically an image with text embedded onto it. Although memes are commonly used for fun, they can also be used to spread hate and fake information. That, along with their growing ubiquity across several social platforms, has made automatic analysis of memes a widespread topic of research. In this paper, we report our initial experiments on the Memotion Analysis problem through joint embeddings. Results marginally yield SOTA.
43

Zhou, Houquan, Shenghua Liu, Danai Koutra, Huawei Shen, and Xueqi Cheng. "A Provable Framework of Learning Graph Embeddings via Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4946–53. http://dx.doi.org/10.1609/aaai.v37i4.25621.

Abstract:
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embedding on the summary graph, and then restoring the embeddings of the original nodes. However, the justification behind those steps is still unknown. In this work, we propose GELSUMM, a well-formulated graph embedding learning framework based on graph summarization, in which we show the theoretical ground of learning from summary graphs and the restoration with the three well-known graph embedding approaches in a closed form. Through extensive experiments on real-world datasets, we demonstrate that our methods can learn graph embeddings with matching or better performance on downstream tasks. This work provides theoretical analysis for learning node embeddings via summarization and helps explain and understand the mechanism of the existing works.
44

Angerer, Philipp, David S. Fischer, Fabian J. Theis, Antonio Scialdone, and Carsten Marr. "Automatic identification of relevant genes from low-dimensional embeddings of single-cell RNA-seq data." Bioinformatics 36, no. 15 (March 24, 2020): 4291–95. http://dx.doi.org/10.1093/bioinformatics/btaa198.

Abstract:
Motivation: Dimensionality reduction is a key step in the analysis of single-cell RNA-sequencing data. It produces a low-dimensional embedding for visualization and as a calculation base for downstream analysis. Nonlinear techniques are most suitable to handle the intrinsic complexity of large, heterogeneous single-cell data. However, with no linear relation between gene and embedding coordinate, there is no way to extract the identity of genes driving any cell's position in the low-dimensional embedding, making it difficult to characterize the underlying biological processes.
Results: In this article, we introduce the concepts of local and global gene relevance to compute an equivalent of principal component analysis loadings for non-linear low-dimensional embeddings. Global gene relevance identifies drivers of the overall embedding, while local gene relevance identifies those of a defined sub-region. We apply our method to single-cell RNA-seq datasets from different experimental protocols and to different low-dimensional embedding techniques. This shows our method's versatility to identify key genes for a variety of biological processes.
Availability and implementation: To ensure reproducibility and ease of use, our method is released as part of destiny 3.0, a popular R package for building diffusion maps from single-cell transcriptomic data. It is readily available through Bioconductor.
Supplementary information: Supplementary data are available at Bioinformatics online.
45

Alachram, Halima, Hryhorii Chereda, Tim Beißbarth, Edgar Wingender, and Philip Stegmaier. "Text mining-based word representations for biomedical data analysis and protein-protein interaction networks in machine learning tasks." PLOS ONE 16, no. 10 (October 15, 2021): e0258623. http://dx.doi.org/10.1371/journal.pone.0258623.

Abstract:
Biomedical and life science literature is an essential way to publish experimental results. With the rapid growth of the number of new publications, the amount of scientific knowledge represented in free text is increasing remarkably. There has been much interest in developing techniques that can extract this knowledge and make it accessible to aid scientists in discovering new relationships between biological entities and answering biological questions. Making use of the word2vec approach, we generated word vector representations based on a corpus consisting of over 16 million PubMed abstracts. We developed a text mining pipeline to produce word2vec embeddings with different properties and performed validation experiments to assess their utility for biomedical analysis. An important pre-processing step consisted in the substitution of synonymous terms by their preferred terms in biomedical databases. Furthermore, we extracted gene-gene networks from two embedding versions and used them as prior knowledge to train Graph-Convolutional Neural Networks (CNNs) on large breast cancer gene expression data and on other cancer datasets. Performances of resulting models were compared to Graph-CNNs trained with protein-protein interaction (PPI) networks or with networks derived using other word embedding algorithms. We also assessed the effect of corpus size on the variability of word representations. Finally, we created a web service with a graphical and a RESTful interface to extract and explore relations between biomedical terms using annotated embeddings. Comparisons to biological databases showed that relations between entities such as known PPIs, signaling pathways and cellular functions, or narrower disease ontology groups correlated with higher cosine similarity. Graph-CNNs trained with word2vec-embedding-derived networks performed sufficiently good for the metastatic event prediction tasks compared to other networks. Such performance was good enough to validate the utility of our generated word embeddings in constructing biological networks. Word representations as produced by text mining algorithms like word2vec, therefore are able to capture biologically meaningful relations between entities. Our generated embeddings are publicly available at https://github.com/genexplain/Word2vec-based-Networks/blob/main/README.md.
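Once such embeddings are trained, querying them for related biomedical entities is a one-liner in gensim; the file name below is hypothetical, standing in for the published vectors.

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("pubmed_word2vec.bin", binary=True)
print(kv.most_similar("BRCA1", topn=5))   # nearby terms, e.g. interacting genes
print(kv.similarity("BRCA1", "BRCA2"))    # cosine similarity between two terms
```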
46

Wang, Bin, Yu Chen, Jinfang Sheng, and Zhengkun He. "Attributed Graph Embedding Based on Attention with Cluster." Mathematics 10, no. 23 (December 1, 2022): 4563. http://dx.doi.org/10.3390/math10234563.

Abstract:
Graph embedding is of great significance for the research and analysis of graphs. Graph embedding aims to map nodes in the network to low-dimensional vectors while preserving information in the original graph of nodes. In recent years, the appearance of graph neural networks has significantly improved the accuracy of graph embedding. However, the influence of clusters was not considered in existing graph neural network (GNN)-based methods, so this paper proposes a new method to incorporate the influence of clusters into the generation of graph embedding. We use the attention mechanism to pass the message of the cluster pooled result and integrate the whole process into the graph autoencoder as the third layer of the encoder. The experimental results show that our model has made great improvement over the baseline methods in the node clustering and link prediction tasks, demonstrating that the embeddings generated by our model have excellent expressiveness.
47

Das, Kajal. "From the geometry of box spaces to the geometry and measured couplings of groups." Journal of Topology and Analysis 10, no. 02 (June 2018): 401–20. http://dx.doi.org/10.1142/s1793525318500127.

Abstract:
In this paper, we prove that if two “box spaces” of two residually finite groups are coarsely equivalent, then the two groups are “uniform measured equivalent” (UME). More generally, we prove that if there is a coarse embedding of one box space into another box space, then there exists a “uniform measured equivalent embedding” (UME-embedding) of the first group into the second one. This is a reinforcement of the easier fact that a coarse equivalence (resp. a coarse embedding) between the box spaces gives rise to a coarse equivalence (resp. a coarse embedding) between the groups. We deduce new invariants that distinguish box spaces up to coarse embedding and coarse equivalence. In particular, we obtain that the expanders coming from [Formula: see text] cannot be coarsely embedded inside the expanders of [Formula: see text], where [Formula: see text] and [Formula: see text]. Moreover, we obtain a countable class of residually finite groups which are mutually coarse-equivalent but any of their box spaces are not coarse-equivalent.
48

Gauthier, P. M., and E. S. Zeron. "Embedding Stein manifolds and tangential approximation." Complex Variables and Elliptic Equations 51, no. 8-11 (August 2006): 953–58. http://dx.doi.org/10.1080/17476930600673005.

49

Teoh, Andrew B. J., and Ying Han Pang. "Analysis on Supervised Neighborhood Preserving Embedding." IEICE Electronics Express 6, no. 23 (2009): 1631–37. http://dx.doi.org/10.1587/elex.6.1631.

50

Zheng, Jianwei, Hong Qiu, Xinli Xu, Wanliang Wang, and Qiongfang Huang. "Fast Discriminative Stochastic Neighbor Embedding Analysis." Computational and Mathematical Methods in Medicine 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/106867.

Abstract:
Feature is important for many applications in biomedical signal analysis and living system analysis. A fast discriminative stochastic neighbor embedding analysis (FDSNE) method for feature extraction is proposed in this paper by improving the existing DSNE method. The proposed algorithm adopts an alternative probability distribution model constructed based on its K-nearest neighbors from the interclass and intraclass samples. Furthermore, FDSNE is extended to nonlinear scenarios using the kernel trick and then kernel-based methods, that is, KFDSNE1 and KFDSNE2. FDSNE, KFDSNE1, and KFDSNE2 are evaluated in three aspects: visualization, recognition, and elapsed time. Experimental results on several datasets show that, compared with DSNE and MSNP, the proposed algorithm not only significantly enhances the computational efficiency but also obtains higher classification accuracy.