Journal articles on the topic "Sentence Embedding Spaces"


Consult the 18 best journal articles for your research on the topic "Sentence Embedding Spaces".


1

Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence". Applied Sciences 11, no. 7 (April 3, 2021): 3214. http://dx.doi.org/10.3390/app11073214.

Abstract
Visual-semantic embedding aims to learn a joint embedding space where related video and sentence instances are located close to each other. Most existing methods put instances in a single embedding space. However, they struggle to embed instances due to the difficulty of matching visual dynamics in videos to textual features in sentences. A single space is not enough to accommodate various videos and sentences. In this paper, we propose a novel framework that maps instances into multiple individual embedding spaces so that we can capture multiple relationships between instances, leading to compelling video retrieval. We propose to produce a final similarity between instances by fusing similarities measured in each embedding space using a weighted sum strategy. We determine the weights according to the sentence. Therefore, we can flexibly emphasize an embedding space. We conducted sentence-to-video retrieval experiments on a benchmark dataset. The proposed method achieved superior performance, and the results are competitive with state-of-the-art methods. These experimental results demonstrate the effectiveness of the proposed multiple embedding approach compared to existing methods.
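The weighted-sum fusion described above can be sketched in a few lines: compute one cosine similarity per embedding space and combine them with sentence-dependent weights. The shapes, weights, and random vectors below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fused_similarity(video_embs, sent_embs, weights):
    """Fuse per-space cosine similarities with sentence-dependent weights.

    video_embs, sent_embs: lists of K vectors, one per embedding space.
    weights: (K,) non-negative weights (e.g., predicted from the sentence),
             assumed to sum to 1.
    """
    sims = []
    for v, s in zip(video_embs, sent_embs):
        sims.append(float(v @ s / (np.linalg.norm(v) * np.linalg.norm(s) + 1e-8)))
    return float(np.dot(weights, sims))

# Toy usage: two embedding spaces, weights chosen to emphasize the second space.
rng = np.random.default_rng(0)
video = [rng.normal(size=16), rng.normal(size=32)]
sentence = [rng.normal(size=16), rng.normal(size=32)]
w = np.array([0.3, 0.7])
print(fused_similarity(video, sentence, w))
```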
2

Liu, Yi, Chengyu Yin, Jingwei Li, Fang Wang, and Senzhang Wang. "Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN". Algorithms 15, no. 3 (February 28, 2022): 80. http://dx.doi.org/10.3390/a15030080.

Abstract
Accurately predicting user–item interactions is critically important in many real applications, including recommender systems and user behavior analysis in social networks. One major drawback of existing studies is that they generally directly analyze the sparse user–item interaction data without considering their semantic correlations and the structural information hidden in the data. Another limitation is that existing approaches usually embed the users and items into different embedding spaces in a static way, but ignore the dynamic characteristics of both users and items. In this paper, we propose to learn the dynamic embedding vector trajectories rather than the static embedding vectors for users and items simultaneously. A Metapath-guided Recursive RNN based Shift embedding method named MRRNN-S is proposed to learn the continuously evolving embeddings of users and items for more accurately predicting their future interactions. The proposed MRRNN-S is extended from our previous model RRNN-S, which was proposed in earlier work. Compared with RRNN-S, we add the word2vec module and the skip-gram-based meta-path module to better capture the rich auxiliary information in the user–item interaction data. Specifically, we first regard the interaction data of each user with items as sentence data to model their semantic and sequential information, and construct the user–item interaction graph. Then we sample instances of meta-paths to capture the heterogeneity and structural information from the user–item interaction graph. A recursive RNN is proposed to iteratively and mutually learn the dynamic user and item embeddings in the same latent space based on their historical interactions. Next, a shift embedding module is proposed to predict the future user embeddings. To predict which item a user will interact with, we output the item embedding instead of the pairwise interaction probability between users and items, which is much more efficient. Through extensive experiments on three real-world datasets, we demonstrate that MRRNN-S achieves superior performance by extensive comparison with state-of-the-art baseline models.
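The step of treating each user's item interactions as a "sentence" for word2vec can be sketched with gensim's skip-gram implementation (4.x API); the item IDs and hyperparameters below are toy assumptions, not the MRRNN-S setup.

```python
from gensim.models import Word2Vec

# Each user's chronological item interactions treated as one "sentence" of item IDs
# (illustrative toy data, not the paper's dataset).
interaction_sentences = [
    ["item12", "item7", "item33", "item7"],
    ["item33", "item12", "item98"],
    ["item7", "item98", "item12", "item33"],
]

# Skip-gram (sg=1) embeddings of items, analogous to a word2vec module over interactions.
model = Word2Vec(
    sentences=interaction_sentences,
    vector_size=32,   # embedding dimension
    window=2,         # context window over the interaction sequence
    min_count=1,
    sg=1,             # skip-gram rather than CBOW
    epochs=50,
    seed=42,
)

print(model.wv["item12"][:5])                    # first few dimensions of an item vector
print(model.wv.most_similar("item12", topn=2))   # items appearing in similar contexts
```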
3

Qian, Chen, Fuli Feng, Lijie Wen, and Tat-Seng Chua. "Conceptualized and Contextualized Gaussian Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13683–91. http://dx.doi.org/10.1609/aaai.v35i15.17613.

Abstract
Word embedding can represent a word as a point vector or a Gaussian distribution in high-dimensional spaces. A Gaussian distribution is innately more expressive than a point vector owing to its ability to additionally capture the semantic uncertainties of words, and thus can express asymmetric relations among words more naturally (e.g., animal entails cat but not the reverse). However, previous Gaussian embedders neglect inner-word conceptual knowledge and lack a tailored Gaussian contextualizer, leading to inferior performance on both intrinsic (context-agnostic) and extrinsic (context-sensitive) tasks. In this paper, we first propose a novel Gaussian embedder which explicitly accounts for inner-word conceptual units (sememes) to represent word semantics more precisely; during learning, we propose Gaussian Distribution Attention over Gaussian representations to adaptively aggregate multiple sememe distributions into a word distribution, which guarantees the Gaussian linear combination property. Additionally, we propose a Gaussian contextualizer to utilize outer-word contexts in a sentence, producing contextualized Gaussian representations for context-sensitive tasks. Extensive experiments on intrinsic and extrinsic tasks demonstrate the effectiveness of the proposed approach, achieving state-of-the-art performance with nearly 5.00% relative improvement.
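One way to read the Gaussian linear combination property mentioned above is the moment rule for a weighted sum of independent Gaussians; the sketch below aggregates toy diagonal "sememe" Gaussians with attention-like weights under that assumption. This is an interpretation for illustration, not the authors' exact operator.

```python
import numpy as np

def combine_gaussians(means, diag_vars, weights):
    """Linearly combine independent diagonal Gaussians N(mu_i, diag(var_i)).

    For X = sum_i w_i X_i with independent Gaussian X_i, the result is Gaussian with
      mean = sum_i w_i * mu_i
      var  = sum_i w_i**2 * var_i   (element-wise, diagonal covariances)
    """
    means = np.asarray(means)          # (K, d)
    diag_vars = np.asarray(diag_vars)  # (K, d)
    w = np.asarray(weights)            # (K,)
    mean = (w[:, None] * means).sum(axis=0)
    var = ((w ** 2)[:, None] * diag_vars).sum(axis=0)
    return mean, var

# Toy example: two "sememe" Gaussians aggregated with attention-like weights.
mu = [[1.0, 0.0], [0.0, 2.0]]
var = [[0.5, 0.5], [1.0, 0.2]]
attn = np.array([0.6, 0.4])   # e.g., a softmax output from an attention module
word_mu, word_var = combine_gaussians(mu, var, attn)
print(word_mu, word_var)
```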
4

Cantini, Riccardo, Fabrizio Marozzo, Giovanni Bruno, and Paolo Trunfio. "Learning Sentence-to-Hashtags Semantic Mapping for Hashtag Recommendation on Microblogs". ACM Transactions on Knowledge Discovery from Data 16, no. 2 (April 30, 2022): 1–26. http://dx.doi.org/10.1145/3466876.

Abstract
The growing use of microblogging platforms is generating a huge amount of posts that need effective methods to be classified and searched. In Twitter and other social media platforms, hashtags are exploited by users to facilitate the search, categorization, and spread of posts. Choosing the appropriate hashtags for a post is not always easy for users, and therefore posts are often published without hashtags or with poorly defined hashtags. To deal with this issue, we propose a new model, called HASHET (HAshtag recommendation using Sentence-to-Hashtag Embedding Translation), aimed at suggesting a relevant set of hashtags for a given post. HASHET is based on two independent latent spaces for embedding the text of a post and the hashtags it contains. A mapping process based on a multi-layer perceptron is then used for learning a translation from the semantic features of the text to the latent representation of its hashtags. We evaluated the effectiveness of two language representation models for sentence embedding and tested different search strategies for semantic expansion, finding that the combined use of BERT (Bidirectional Encoder Representations from Transformers) and a global expansion strategy leads to the best recommendation results. HASHET has been evaluated on two real-world case studies related to the 2016 United States presidential election and the COVID-19 pandemic. The results reveal the effectiveness of HASHET in predicting one or more correct hashtags, with an average F-score up to 0.82 and a recommendation hit-rate up to 0.92. Our approach has been compared to the most relevant techniques used in the literature (generative models, unsupervised models, and attention-based supervised models), achieving up to a 15% improvement in F-score for the hashtag recommendation task and 9% for the topic discovery task.
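A hedged sketch of the sentence-to-hashtag translation idea: an MLP maps sentence embeddings into the hashtag latent space, and the nearest hashtag vectors are recommended. All embeddings and names below are random stand-ins; HASHET's BERT encoder, training data, and expansion strategies are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: 200 posts with 64-d sentence embeddings and 32-d target hashtag vectors.
X_sent = rng.normal(size=(200, 64))   # e.g., from a sentence encoder such as BERT
Y_hash = rng.normal(size=(200, 32))   # latent vectors of the hashtags each post used

# Mapping network: learn a translation from the sentence space to the hashtag space.
mapper = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
mapper.fit(X_sent, Y_hash)

# Hashtag vocabulary embeddings (toy); recommendation = nearest hashtags to the mapped point.
hashtag_vocab = {f"#tag{i}": rng.normal(size=32) for i in range(50)}

def recommend(sentence_emb, k=3):
    target = mapper.predict(sentence_emb[None, :])[0]
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    scored = sorted(hashtag_vocab.items(), key=lambda kv: -cos(target, kv[1]))
    return [tag for tag, _ in scored[:k]]

print(recommend(rng.normal(size=64)))
```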
5

Zhang, Yachao, Runze Hu, Ronghui Li, Yanyun Qu, Yuan Xie, and Xiu Li. "Cross-Modal Match for Language Conditioned 3D Object Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 7359–67. http://dx.doi.org/10.1609/aaai.v38i7.28566.

Abstract
Language conditioned 3D object grounding aims to find the object within the 3D scene mentioned by natural language descriptions, which mainly depends on the matching between vision and natural language. Considerable improvement in grounding performance is achieved by improving the multimodal fusion mechanism or bridging the gap between detection and matching. However, several mismatches are ignored, i.e., the mismatch between local visual representations and the global sentence representation, and the mismatch between the visual space and the corresponding label word space. In this paper, we propose cross-modal match for 3D grounding from the perspective of mitigating these mismatches. Specifically, to match local visual features with the global description sentence, we propose a BEV (Bird’s-eye-view) based global information embedding module. It projects multiple object proposal features into the BEV, and the relations between different objects are captured by a visual transformer which can model both positions and features with long-range dependencies. To circumvent the mismatch between the feature spaces of the different modalities, we propose cross-modal consistency learning. It imposes cross-modal consistency constraints to convert the visual feature space into the label word feature space, resulting in easier matching. In addition, we introduce a label distillation loss and a global distillation loss to drive the learning of these matches in a distillation manner. We evaluate our method in mainstream evaluation settings on three datasets, and the results demonstrate the effectiveness of the proposed method.
6

Dancygier, Barbara. "Mental space embeddings, counterfactuality, and the use of unless". English Language and Linguistics 6, no. 2 (October 9, 2002): 347–77. http://dx.doi.org/10.1017/s1360674302000278.

Abstract
Unless-constructions have often been compared with conditionals. It was noted that unless can in most cases be paraphrased with if not, but that its meaning resembles that of except if (Geis, 1973; von Fintel, 1991). Initially, it was also assumed that, unlike if-conditionals, unless-sentences with counterfactual (or irrealis) meanings are not acceptable. In recent studies by Declerck and Reed (2000, 2001), however, the acceptability of such sentences was demonstrated and a new analysis was proposed. The present article argues for an account of irrealis unless-sentences in terms of epistemic distance and mental space embeddings. First, the use of verb forms in irrealis sentences is described as an instance of the use of distanced forms, which are widely used in English to mark hypotheticality. In the second part, the theory of mental spaces is introduced and applied to show how different mental space set-ups (in conjunction with distanced forms) account for the construction of different hypothetical meanings. The so-called irrealis unless-sentences are then interpreted as a number of instances of mental space embeddings. Finally, it is shown how the account proposed explains the fact that some unless-constructions can be paraphrased only with if not while others only with except if.
7

Amigo, Enrique, Alejandro Ariza-Casabona, Victor Fresno, and M. Antonia Marti. "Information Theory–based Compositional Distributional Semantics". Computational Linguistics 48, no. 4 (2022): 907–48. http://dx.doi.org/10.1162/_.

Abstract
In the context of text representation, Compositional Distributional Semantics models aim to fuse the Distributional Hypothesis and the Principle of Compositionality. Text embedding is based on co-occurrence distributions, and the representations are in turn combined by compositional functions that take into account the text structure. However, the theoretical basis of compositional functions is still an open issue. In this article we define and study the notion of Information Theory–based Compositional Distributional Semantics (ICDS): (i) We first establish formal properties for embedding, composition, and similarity functions based on Shannon’s Information Theory; (ii) we analyze the existing approaches under this prism, checking whether or not they comply with the established desirable properties; (iii) we propose two parameterizable composition and similarity functions that generalize traditional approaches while fulfilling the formal properties; and finally (iv) we perform an empirical study on several textual similarity datasets that include sentences with high and low lexical overlap, and on the similarity between words and their descriptions. Our theoretical analysis and empirical results show that fulfilling the formal properties positively affects the accuracy of text representation models in terms of correspondence (isometry) between the embedding and meaning spaces.
8

Faraz, Anum, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos, and Andreas Kanavos. "Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework". Information 15, no. 4 (April 19, 2024): 233. http://dx.doi.org/10.3390/info15040233.

Abstract
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children’s online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset, aimed at identifying and mitigating potential sexual predatory behaviors through chat conversation analysis. By utilizing fastText for word embeddings to vectorize sentences, we have refined a support vector machine (SVM) classifier, achieving remarkable performance metrics, with recall, accuracy, and F-scores approaching 0.99. These metrics not only demonstrate the classifier’s effectiveness, but also signify a significant advancement beyond existing methodologies in this field. The efficacy of our framework is additionally validated on a custom dataset, composed of 71 predatory chat logs from the Perverted Justice website, further establishing the reliability and robustness of our classifier. Protectbot represents a crucial innovation in enhancing child safety within online gaming communities, providing a proactive, AI-enhanced solution to detect and address predatory threats promptly. Our findings highlight the immense potential of AI-driven interventions to create safer digital spaces for young users.
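A hedged sketch of the described classification pipeline: fastText-style word vectors are averaged into sentence vectors and fed to an SVM. It uses gensim's FastText on a few invented, innocuous messages; the data, hyperparameters, and the gensim implementation are assumptions, not the Protectbot configuration trained on PAN12.

```python
import numpy as np
from gensim.models import FastText
from sklearn.svm import SVC

# Toy chat messages with labels (1 = flagged, 0 = benign); illustrative only.
texts = [
    "hey what school do you go to",
    "do not tell your parents about this chat",
    "want to play another round tonight",
    "nice match good game everyone",
]
labels = [1, 1, 0, 0]
tokens = [t.split() for t in texts]

# Subword-aware word embeddings (the paper uses fastText; gensim's implementation here).
ft = FastText(sentences=tokens, vector_size=32, window=3, min_count=1, epochs=50, seed=0)

def sentence_vector(words):
    """Average word vectors to vectorize a sentence, as in the described pipeline."""
    return np.mean([ft.wv[w] for w in words], axis=0)

X = np.vstack([sentence_vector(t) for t in tokens])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([sentence_vector("what school is that".split())]))
```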
9

Croce, Danilo, Giuseppe Castellucci, and Roberto Basili. "Adversarial training for few-shot text classification". Intelligenza Artificiale 14, no. 2 (January 11, 2021): 201–14. http://dx.doi.org/10.3233/ia-200051.

Abstract
In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to reach high performance by relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. Usually, in Machine Learning this problem is mitigated by the usage of semi-supervised methods or, more recently, by using Transfer Learning in the context of deep architectures. One recent promising method to enable semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in the context of NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating in expressive low-dimensional embeddings; these are derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces and the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, by considering different sizes of training material and different numbers of target classes. By applying such an adversarial schema to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, when considering a complex classification schema, e.g., involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
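A minimal PyTorch sketch of the SS-GAN idea over precomputed sentence embeddings: the discriminator has K real classes plus one "fake" class, labeled examples contribute a supervised cross-entropy term, and unlabeled plus generated embeddings contribute the real/fake terms. All dimensions, data, and the exact loss weighting are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NOISE_DIM, NUM_CLASSES = 64, 16, 5   # K real classes; index K is the "fake" class

generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, EMB_DIM))
discriminator = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES + 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

# Toy batches: labeled real embeddings, unlabeled real embeddings, and noise for the generator.
labeled_x = torch.randn(8, EMB_DIM)
labeled_y = torch.randint(0, NUM_CLASSES, (8,))
unlabeled_x = torch.randn(32, EMB_DIM)
noise = torch.randn(32, NOISE_DIM)

for _ in range(5):
    # Discriminator step: supervised CE on labeled data + real/fake terms on the rest.
    fake_x = generator(noise).detach()
    loss_sup = F.cross_entropy(discriminator(labeled_x), labeled_y)
    p_fake_unl = F.softmax(discriminator(unlabeled_x), dim=1)[:, -1]   # should be low (real)
    p_fake_gen = F.softmax(discriminator(fake_x), dim=1)[:, -1]        # should be high (fake)
    loss_d = loss_sup - torch.log(1 - p_fake_unl + 1e-8).mean() - torch.log(p_fake_gen + 1e-8).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push generated embeddings to be classified as real.
    p_fake_gen = F.softmax(discriminator(generator(noise)), dim=1)[:, -1]
    loss_g = -torch.log(1 - p_fake_gen + 1e-8).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(float(loss_d), float(loss_g))
```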
10

Hao, Sun, Xiaolin Qin, and Xiaojing Liu. "Learning hierarchical embedding space for image-text matching". Intelligent Data Analysis, September 14, 2023, 1–19. http://dx.doi.org/10.3233/ida-230214.

Abstract
There are two mainstream strategies for image-text matching at present. The first, termed joint embedding learning, aims to model the semantic information of both image and sentence in a shared feature subspace, which facilitates the measurement of semantic similarity but focuses only on the global alignment relationship. To explore the local semantic relationship more fully, the other, termed metric learning, aims to learn a complex similarity function that directly outputs a score for each image-text pair. However, it suffers from a significantly higher computational burden at the retrieval stage. In this paper, we propose a hierarchically joint embedding model to incorporate the local semantic relationship into a joint embedding learning framework. The proposed method learns the shared local and global embedding spaces simultaneously, and models the joint local embedding space with respect to specific local similarity labels which are easy to obtain from the lexical information of the corpus. Unlike methods based on metric learning, we can prepare fixed representations of both images and sentences by concatenating the normalized local and global representations, which makes it feasible to perform efficient retrieval. Experiments show that the proposed model can achieve competitive performance when compared to existing joint embedding learning models on two publicly available datasets, Flickr30k and MS-COCO.
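The fixed representation described above (L2-normalized local and global embeddings concatenated, then compared with a simple similarity) can be sketched directly; the dimensions and random vectors below are stand-ins, not the paper's features.

```python
import numpy as np

def joint_representation(local_emb, global_emb):
    """Concatenate L2-normalized local and global embeddings into one fixed vector."""
    l = local_emb / (np.linalg.norm(local_emb) + 1e-8)
    g = global_emb / (np.linalg.norm(global_emb) + 1e-8)
    return np.concatenate([l, g])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy retrieval: rank 3 candidate images for one sentence by cosine similarity
# in the concatenated space (all vectors are random stand-ins).
rng = np.random.default_rng(0)
sentence = joint_representation(rng.normal(size=64), rng.normal(size=128))
images = [joint_representation(rng.normal(size=64), rng.normal(size=128)) for _ in range(3)]

ranking = sorted(range(len(images)), key=lambda i: -cos(sentence, images[i]))
print(ranking)
```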
11

Liu, Gang, Yichao Dong, Kai Wang, and Zhizheng Yan. "A cross-lingual sentence pair interaction feature capture model based on pseudo-corpus and multilingual embedding". AI Communications, April 13, 2022, 1–14. http://dx.doi.org/10.3233/aic-210085.

Abstract
Recently, the emergence of the digital language division and the availability of cross-lingual benchmarks have made research on cross-lingual texts more popular. However, the performance of existing methods based on mapping relations is not good enough, because the structures of language spaces are sometimes not isomorphic. In addition, polysemy makes the extraction of interaction features difficult. For cross-lingual word embedding, a model named Cross-lingual Word Embedding Space Based on Pseudo Corpus (CWE-PC) is proposed to obtain cross-lingual and multilingual word embeddings. For capturing cross-lingual sentence pair interaction features, a Cross-language Feature Capture Based on Similarity Matrix (CFC-SM) model is built to extract cross-lingual interaction features. A pretrained ELMo model and multi-layer convolutions are used to alleviate polysemy and extract interaction features. These models are evaluated on multiple language pairs, and the results show that they outperform state-of-the-art cross-lingual word embedding methods.
12

Litschko, Robert, Ivan Vulić, Simone Paolo Ponzetto, and Goran Glavaš. "On cross-lingual retrieval with multilingual text encoders". Information Retrieval Journal, March 7, 2022. http://dx.doi.org/10.1007/s10791-022-09406-x.

Abstract
Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR—a setup with no relevance judgments for IR-specific fine-tuning—pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are met by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than using their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., we learn to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves the performance of multilingual transformers as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer), we manage to improve the ranking quality. We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
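The localized relevance matching idea (scoring the query against each document section independently, then aggregating) can be sketched as below; the encoder name, the toy sections, and the max-aggregation are illustrative assumptions rather than the paper's exact setup, and the snippet assumes the sentence-transformers package is available.

```python
from sentence_transformers import SentenceTransformer, util

# Any multilingual sentence encoder works here; this model name is one public option.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "effects of climate change on agriculture"
document_sections = [
    "Die Landwirtschaft leidet zunehmend unter Dürreperioden.",
    "The company reported record quarterly profits.",
    "Los rendimientos de los cultivos disminuyen con el aumento de las temperaturas.",
]

q_emb = model.encode(query, convert_to_tensor=True)
s_embs = model.encode(document_sections, convert_to_tensor=True)

section_scores = util.cos_sim(q_emb, s_embs)[0]   # one score per section
doc_score = float(section_scores.max())           # aggregate: best-matching section wins
print([round(float(s), 3) for s in section_scores], doc_score)
```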
13

Lee, Jun-Min, and Tae-Bin Ha. "Unsupervised Text Embedding Space Generation Using Generative Adversarial Networks for Text Synthesis". Northern European Journal of Language Technology 9, no. 1 (October 24, 2023). http://dx.doi.org/10.3384/nejlt.2000-1533.2023.4855.

Abstract
A Generative Adversarial Network (GAN) is a model for data synthesis that creates plausible data through the competition of a generator and a discriminator. Although the application of GANs to image synthesis has been studied extensively, they have inherent limitations for natural language generation. Because natural language is composed of discrete tokens, a generator has difficulty updating its gradient through backpropagation; therefore, most text-GAN studies generate sentences starting with a random token based on a reward system. Thus, the generators of previous studies are pre-trained in an autoregressive way before adversarial training, causing data memorization in which synthesized sentences reproduce the training data. In this paper, we synthesize sentences using a framework similar to the original GAN. More specifically, we propose Text Embedding Space Generative Adversarial Networks (TESGAN), which generate continuous text embedding spaces instead of discrete tokens to solve the gradient backpropagation problem. Furthermore, TESGAN conducts unsupervised learning which does not directly refer to the text of the training data, in order to overcome the data memorization issue. By adopting this novel method, TESGAN can synthesize new sentences, showing the potential of unsupervised learning for text synthesis. We expect to see extended research combining Large Language Models with a new perspective of viewing text as a continuous space.
14

Amigó, Enrique, Alejandro Ariza-Casabona, Víctor Fresno, and M. Antònia Martí. "Information Theory-based Compositional Distributional Semantics". Computational Linguistics, August 5, 2022, 1–41. http://dx.doi.org/10.1162/coli_a_00454.

Abstract
In the context of text representation, Compositional Distributional Semantics models aim to fuse the Distributional Hypothesis and the Principle of Compositionality. Text embedding is based on co-occurrence distributions, and the representations are in turn combined by compositional functions that take into account the text structure. However, the theoretical basis of compositional functions is still an open issue. In this paper we define and study the notion of Information Theory-based Compositional Distributional Semantics (ICDS): (i) we first establish formal properties for embedding, composition and similarity functions based on Shannon’s Information Theory; (ii) we analyse the existing approaches under this prism, checking whether or not they comply with the established desirable properties; (iii) we propose two parameterisable composition and similarity functions that generalise traditional approaches while fulfilling the formal properties; and finally (iv) we perform an empirical study on several textual similarity datasets that include sentences with high and low lexical overlap, and on the similarity between words and their descriptions. Our theoretical analysis and empirical results show that fulfilling the formal properties positively affects the accuracy of text representation models in terms of correspondence (isometry) between the embedding and meaning spaces.
15

Zhang, Meng, Zhiwen Xie, Jin Liu, Xiao Liu, Xiao Yu, and Bo Huang. "HyperED: A hierarchy-aware network based on hyperbolic geometry for event detection". Computational Intelligence, January 4, 2024. http://dx.doi.org/10.1111/coin.12627.

Abstract
Event detection plays an essential role in the task of event extraction. It aims at identifying event trigger words in a sentence and classifying event types. In real-world scenarios, multiple event types are usually well-organized in a hierarchical structure, and the hierarchical correlations between event types can be used to enhance event detection performance. However, this kind of hierarchical information has received insufficient attention, which can lead to misclassification between multiple event types. In addition, most existing methods perform event detection in Euclidean space, which cannot adequately represent hierarchical relationships. To address these issues, we propose a novel event detection network, HyperED, which embeds the event context and types in the Poincaré ball of hyperbolic geometry to help learn hierarchical features between events. Specifically, for the event detection context, we first leverage pre-trained BERT or a BiLSTM in Euclidean space to learn the semantic features of ED sentences. Meanwhile, to make full use of the dependency knowledge, a GNN-based model is applied when encoding event types to learn the correlations between events. Then we use a simple neural transformation to project the embeddings into the Poincaré ball to capture hierarchical features, and a distance score in hyperbolic space is computed for prediction. The experiments on the MAVEN and ACE 2005 datasets indicate the effectiveness of the HyperED model and prove the natural advantages of hyperbolic spaces in expressing hierarchies in an intuitive way.
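The hyperbolic machinery mentioned above can be written out concretely: an exponential map at the origin projects Euclidean features into the Poincaré ball, and the standard Poincaré distance (curvature -1) provides the score. The toy vectors below stand in for the BERT/BiLSTM features; this is a generic sketch, not the HyperED code.

```python
import numpy as np

def exp_map_origin(v, eps=1e-9):
    """Project a Euclidean vector onto the Poincaré ball via the exponential map at 0."""
    norm = np.linalg.norm(v) + eps
    return np.tanh(norm) * v / norm

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance in the Poincaré ball (curvature -1)."""
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return np.arccosh(1 + 2 * diff / (denom + eps))

# Toy illustration: a context embedding and an event-type embedding mapped into the ball,
# then compared with the hyperbolic distance used for scoring.
rng = np.random.default_rng(0)
ctx = exp_map_origin(rng.normal(size=8) * 0.1)
event_type = exp_map_origin(rng.normal(size=8) * 0.1)
print(poincare_distance(ctx, event_type))
```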
16

Pszona, Maria, Maria Janicka, Grzegorz Wojdyga, and Aleksander Wawer. "Towards universal methods for fake news detection". Natural Language Engineering, October 26, 2022, 1–39. http://dx.doi.org/10.1017/s1351324922000456.

Abstract
Fake news detection is an emerging topic that has attracted a lot of attention among researchers and in the industry. This paper focuses on fake news detection as a text classification problem: on the basis of five publicly available corpora with documents labeled as true or fake, the task was to automatically distinguish both classes without relying on fact-checking. The aim of our research was to test the feasibility of a universal model: one that produces satisfactory results on all data sets tested in our article. We attempted to do so by training a set of classification models on one collection and testing them on another. As it turned out, this resulted in a sharp performance degradation. Therefore, this paper focuses on finding the most effective approach to utilizing information in a transferable manner. We examined a variety of methods: feature selection, machine learning approaches to data set shift (instance re-weighting and projection-based), and deep learning approaches based on domain transfer. These methods were applied to various feature spaces: linguistic and psycholinguistic, embeddings obtained from the Universal Sentence Encoder, and GloVe embeddings. A detailed analysis showed that some combinations of these methods and selected feature spaces bring significant improvements. When using linguistic data, feature selection yielded the best overall mean improvement (across all train-test pairs) of 4%. Among the domain adaptation methods, the greatest improvement of 3% was achieved by subspace alignment.
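Subspace alignment, the domain adaptation method reported as most effective here, is commonly formulated as aligning the source PCA basis to the target PCA basis (in the manner of Fernando et al., 2013). The sketch below applies that generic formulation to random stand-in features; the dimensionality, classifier, and data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy source/target feature matrices (e.g., linguistic features or sentence embeddings).
Xs, ys = rng.normal(size=(200, 50)), rng.integers(0, 2, 200)   # labeled source domain
Xt = rng.normal(loc=0.5, size=(150, 50))                       # unlabeled target domain
d = 10                                                         # subspace dimensionality

# Subspace alignment: rotate the source subspace toward the target subspace.
Bs = PCA(n_components=d).fit(Xs).components_.T    # (n_features, d) source basis
Bt = PCA(n_components=d).fit(Xt).components_.T    # (n_features, d) target basis
M = Bs.T @ Bt                                     # alignment matrix
Xs_aligned = Xs @ Bs @ M                          # source data in the aligned subspace
Xt_proj = Xt @ Bt                                 # target data in its own subspace

# Train on aligned source features, predict on projected target features.
clf = LogisticRegression(max_iter=1000).fit(Xs_aligned, ys)
print(clf.predict(Xt_proj)[:10])
```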
17

Zulqarnain, Muhammad, Rozaida Ghazali, Muhammad Ghulam Ghouse, and Muhammad Faheem Mushtaq. "Efficient processing of GRU based on word embedding for text classification". JOIV: International Journal on Informatics Visualization 3, no. 4 (November 9, 2019). http://dx.doi.org/10.30630/joiv.3.4.289.

Abstract
Text classification has become a very serious problem for large organizations that need to manage large amounts of online data, and it has been extensively applied in Natural Language Processing (NLP) tasks. Text classification can help users effectively manage and exploit meaningful information that needs to be classified into various categories for further use. To classify texts as well as possible, our research aims to develop a deep learning approach that obtains better performance in text classification than other RNN approaches. However, the main problem in text classification is how to enhance the classification accuracy, and the sparsity of the data and the sensitivity of semantics to context often hinder classification performance. To overcome these weaknesses, in this paper we propose a unified structure to investigate the effects of word embedding and the Gated Recurrent Unit (GRU) for text classification on two benchmark datasets (Google snippets and TREC). The GRU is a well-known type of recurrent neural network (RNN) that is able to process sequential data through its recurrent architecture. Empirically, semantically related words are commonly close to each other in embedding spaces. First, words in posts are converted into vectors via a word embedding technique. Then, the word sequences in sentences are fed to the GRU to extract the contextual semantics between words. The experimental results showed that the proposed GRU model can effectively learn word usage in the context of the texts, given the training data. The quantity and quality of the training data significantly affected the performance. We compared the performance of the proposed approach with traditional recurrent approaches (RNN, MV-RNN and LSTM); the proposed approach obtained better results on the two benchmark datasets in terms of accuracy and error rate.
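A minimal PyTorch sketch of the described pipeline: words are mapped to embeddings, the sequence is fed to a GRU, and the final hidden state is classified by a linear layer. Vocabulary size, dimensions, and the toy batch are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GRUTextClassifier(nn.Module):
    """Word embedding layer -> GRU over the token sequence -> linear class scores."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128, num_classes=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len) integer IDs
        embedded = self.embedding(token_ids)       # (batch, seq_len, emb_dim)
        _, last_hidden = self.gru(embedded)        # last_hidden: (1, batch, hidden_dim)
        return self.fc(last_hidden.squeeze(0))     # (batch, num_classes) logits

# Toy usage: a batch of two already-tokenized, padded "snippets".
model = GRUTextClassifier(vocab_size=1000)
batch = torch.tensor([[5, 42, 7, 0, 0], [13, 8, 99, 21, 2]])
logits = model(batch)
print(logits.shape)   # torch.Size([2, 6]) -> one score per class (e.g., TREC's 6 coarse classes)
```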
18

Dikow, Rebecca, Corey DiPietro, Michael Trizna, Hanna BredenbeckCorp, Madeline Bursell, Jenna Ekwealor, Richard Hodel et al. "Developing responsible AI practices at the Smithsonian Institution". Research Ideas and Outcomes 9 (October 25, 2023). http://dx.doi.org/10.3897/rio.9.e113334.

Abstract
Applications of artificial intelligence (AI) and machine learning (ML) have become pervasive in our everyday lives. These applications range from the mundane (asking ChatGPT to write a thank you note) to high-end science (predicting future weather patterns in the face of climate change), but, because they rely on human-generated or mediated data, they also have the potential to perpetuate systemic oppression and racism. For museums and other cultural heritage institutions, there is great interest in automating the kinds of applications at which AI and ML can excel, for example, tasks in computer vision including image segmentation and object recognition (labelling or identifying objects in an image) and natural language processing (e.g. named-entity recognition, topic modelling, generation of word and sentence embeddings) in order to make digital collections and archives discoverable, searchable and appropriately tagged. A coalition of staff, Fellows and interns working in digital spaces at the Smithsonian Institution, who are either engaged with research using AI or ML tools or working closely with digital data in other ways, came together to discuss the promise and potential perils of applying AI and ML at scale, and this work results from those conversations. Here, we present the process that has led to the development of an AI Values Statement and an implementation plan, including the release of datasets with accompanying documentation to enable these data to be used with improved context and reproducibility (dataset cards). We plan to continue releasing dataset cards and, for AI and ML applications, model cards, in order to enable informed usage of Smithsonian data and research products.