Journal articles on the topic "Cross-Lingual knowledge transfer"

Below are the top 50 journal articles for research on the topic "Cross-Lingual knowledge transfer", with abstracts where these are available in the metadata.

1

Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.

Abstract
Cross-lingual cross-modal retrieval has garnered increasing attention recently, which aims to achieve the alignment between vision and target language (V-T) without using any annotated V-T data pairs. Current methods employ machine translation (MT) to construct pseudo-parallel data pairs, which are then used to learn a multi-lingual and multi-modal embedding space that aligns visual and target-language representations. However, the large heterogeneous gap between vision and text, along with the noise present in target language translations, poses significant challenges in effectively aligning their representations. To address these challenges, we propose a general framework, Cross-Lingual to Cross-Modal (CL2CM), which improves the alignment between vision and target language using cross-lingual transfer. This approach allows us to fully leverage the merits of multi-lingual pre-trained models (e.g., mBERT) and the benefits of the same modality structure, i.e., smaller gap, to provide reliable and comprehensive semantic correspondence (knowledge) for the cross-modal network. We evaluate our proposed approach on two multilingual image-text datasets, Multi30K and MSCOCO, and one video-text dataset, VATEX. The results clearly demonstrate the effectiveness of our proposed method and its high potential for large-scale retrieval.
2

Singhal, Abhishek, Happa Khan, and Aditya Sharma. "Empowering Multilingual AI: Cross-Lingual Transfer Learning". Tuijin Jishu/Journal of Propulsion Technology 43, no. 4 (November 26, 2023): 284–87. http://dx.doi.org/10.52783/tjjpt.v43.i4.2353.

Abstract
Multilingual Natural Language Processing (NLP) and Cross-Lingual Transfer Learning have emerged as pivotal fields in the realm of language technology. This abstract explores the essential concepts and methodologies behind these areas, shedding light on their significance in a world characterized by linguistic diversity. Multilingual NLP enables machines to process text across many languages, supporting global collaboration. Cross-lingual transfer learning, on the other hand, leverages knowledge from one language to enhance NLP tasks in another, facilitating efficient resource utilization and improved model performance. The abstract highlights the growing relevance of these approaches in a multilingual and interconnected world, underscoring their potential to reshape the future of natural language understanding and communication.
3

Zhang, Mozhi, Yoshinari Fujinuma, and Jordan Boyd-Graber. "Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.

Abstract
Text classification must sometimes be applied in a low-resource language with no labeled training data. However, training data may be available in a related language. We investigate whether character-level knowledge transfer from a related language helps text classification. We present a cross-lingual document classification framework (CACO) that exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. Experiments confirm that character-level knowledge transfer is more data-efficient than word-level transfer between related languages.
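The embedder-plus-classifier design is compact enough to sketch. Below is a minimal PyTorch rendering of a character-based embedder feeding a word-based document classifier; the layer sizes, mean-pooling choices, and class names are illustrative assumptions, not CACO's actual configuration.

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Derives word vectors from written forms via a character BiLSTM,
    shared across the related source and target languages."""
    def __init__(self, n_chars, char_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, char_ids):          # (n_words, max_chars)
        h, _ = self.lstm(self.char_emb(char_ids))
        return h.mean(dim=1)              # (n_words, word_dim)

class CharDocClassifier(nn.Module):
    """Averages character-derived word vectors, then classifies the document."""
    def __init__(self, n_chars, n_labels, word_dim=64):
        super().__init__()
        self.embedder = CharWordEmbedder(n_chars, word_dim=word_dim)
        self.out = nn.Linear(word_dim, n_labels)

    def forward(self, char_ids):          # one document: (n_words, max_chars)
        return self.out(self.embedder(char_ids).mean(dim=0, keepdim=True))
```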
4

Colhon, Mihaela. "Language engineering for syntactic knowledge transfer". Computer Science and Information Systems 9, no. 3 (2012): 1231–47. http://dx.doi.org/10.2298/csis120130032c.

Abstract
In this paper we present a method for English-Romanian treebank construction, together with the obtained evaluation results. The treebank is built upon a parallel English-Romanian corpus, word-aligned and annotated at the morphological and syntactic levels. The syntactic trees of the Romanian texts are generated from the syntactic phrases of the English parallel texts, which are produced automatically by syntactic parsing. The method reuses and adjusts existing tools and algorithms for cross-lingual transfer of syntactic constituents and for syntactic tree alignment.
5

Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang, and Haobo Cheng. "Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition". Electronics 10, no. 24 (December 20, 2021): 3172. http://dx.doi.org/10.3390/electronics10243172.

Abstract
Phonological-based features (articulatory features, AFs) describe the movements of the vocal organs, which are shared across languages. This paper investigates a domain-adversarial neural network (DANN) to extract reliable AFs, and different multi-stream techniques are used for cross-lingual speech recognition. First, a novel universal definition of phonological attributes is proposed for Mandarin, English, German and French. Then a DANN-based AFs detector is trained using the source languages (English, German and French). For cross-lingual speech recognition, the AFs detectors are used to transfer the phonological knowledge from the source languages (English, German and French) to the target language (Mandarin). Two multi-stream approaches are introduced to fuse the acoustic features and cross-lingual AFs. In addition, the monolingual AFs system (i.e., the AFs are directly extracted from the target language) is also investigated. Experiments show that the performance of the AFs detector can be improved by using convolutional neural networks (CNN) with a domain-adversarial learning method. The multi-head attention (MHA) based multi-stream reaches the best performance compared to the baseline, the cross-lingual adaptation approach, and other approaches. More specifically, the MHA mode with cross-lingual AFs yields significant improvements over monolingual AFs when the training data size is restricted, and it can be easily extended to other low-resource languages.
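The domain-adversarial part hinges on a gradient reversal layer: features pass through unchanged in the forward direction, while gradients from the language discriminator are flipped, so the extractor learns language-invariant articulatory representations. A minimal PyTorch sketch follows; the lambda value and the head names in the usage comment are assumptions.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative wiring: the AF detector sees the features as-is, while the
# language discriminator sees gradient-reversed features.
# af_logits   = af_head(features)
# lang_logits = lang_head(grad_reverse(features, lambd=0.5))
```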
6

Xu, Zenan, Linjun Shou, Jian Pei, Ming Gong, Qinliang Su, Xiaojun Quan, and Daxin Jiang. "A Graph Fusion Approach for Cross-Lingual Machine Reading Comprehension". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13861–68. http://dx.doi.org/10.1609/aaai.v37i11.26623.

Abstract
Although great progress has been made for Machine Reading Comprehension (MRC) in English, scaling out to a large number of languages remains a huge challenge due to the lack of large amounts of annotated training data in non-English languages. To address this challenge, some recent efforts of cross-lingual MRC employ machine translation to transfer knowledge from English to other languages, through either explicit alignment or implicit attention. For effective knowledge transfer, it is beneficial to leverage both semantic and syntactic information. However, the existing methods fail to explicitly incorporate syntax information in model learning. Consequently, the models are not robust to errors in alignment and noises in attention. In this work, we propose a novel approach, which jointly models the cross-lingual alignment information and the mono-lingual syntax information using a graph. We develop a series of algorithms, including graph construction, learning, and pre-training. The experiments on two benchmark datasets for cross-lingual MRC show that our approach outperforms all strong baselines, which verifies the effectiveness of syntax information for cross-lingual MRC.
7

Rijhwani, Shruti, Jiateng Xie, Graham Neubig, and Jaime Carbonell. "Zero-Shot Neural Transfer for Cross-Lingual Entity Linking". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6924–31. http://dx.doi.org/10.1609/aaai.v33i01.33016924.

Abstract
Cross-lingual entity linking maps an entity mention in a source language to its corresponding entry in a structured knowledge base that is in a different (target) language. While previous work relies heavily on bilingual lexical resources to bridge the gap between the source and the target languages, these resources are scarce or unavailable for many low-resource languages. To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language. Specifically, we propose pivot-based entity linking, which leverages information from a high-resource “pivot” language to train character-level neural entity linking models that are transferred to the source low-resource language in a zero-shot manner. With experiments on 9 low-resource languages and transfer through a total of 54 languages, we show that our proposed pivot-based framework improves entity linking accuracy 17% (absolute) on average over the baseline systems, for the zero-shot scenario. Further, we also investigate the use of language-universal phonological representations, which improves average accuracy (absolute) by 36% when transferring between languages that use different scripts.
8

Bari, M. Saiful, Shafiq Joty, and Prathyusha Jwalapuram. "Zero-Resource Cross-Lingual Named Entity Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7415–23. http://dx.doi.org/10.1609/aaai.v34i05.6237.

Abstract
Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features. However, these models still require manually annotated training data, which is not available for many languages. In this paper, we propose an unsupervised cross-lingual NER model that can transfer NER knowledge from one language to another in a completely unsupervised way without relying on any bilingual dictionary or parallel data. Our model achieves this through word-level adversarial learning and augmented fine-tuning with parameter sharing and feature augmentation. Experiments on five different languages demonstrate the effectiveness of our approach, outperforming existing models by a good margin and setting a new SOTA for each language pair.
9

Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.

Abstract
Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that are fine-tuned on a single language and hard to transfer knowledge to another language. To resolve these limitations in existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and classical neural models for cross-lingual natural language inference. It trains on the union of data in the source language and data in the target language, learning language-invariant features to improve the inference performance. Experimental results on the XNLI benchmark demonstrate that three popular neural models enhanced by the proposed framework significantly outperform the original models.
10

Zhang, Weizhao, and Hongwu Yang. "Meta-Learning for Mandarin-Tibetan Cross-Lingual Speech Synthesis". Applied Sciences 12, no. 23 (November 28, 2022): 12185. http://dx.doi.org/10.3390/app122312185.

Abstract
The paper proposes a meta-learning-based Mandarin-Tibetan cross-lingual text-to-speech (TTS) to realize both Mandarin and Tibetan speech synthesis under a unique framework. First, we build two kinds of Tacotron2-based Mandarin-Tibetan cross-lingual baseline TTS. One is a shared encoder Mandarin-Tibetan cross-lingual TTS, and another is a separate encoder Mandarin-Tibetan cross-lingual TTS. Both baseline TTS use the speaker classifier with a gradient reversal layer to disentangle speaker-specific information from the text encoder. At the same time, we design a prosody generator to extract prosodic information from sentences to explore syntactic and semantic information adequately. To further improve the synthesized speech quality of the Tacotron2-based Mandarin-Tibetan cross-lingual TTS, we propose a meta-learning-based Mandarin-Tibetan cross-lingual TTS. Based on the separate encoder Mandarin-Tibetan cross-lingual TTS, we use an additional dynamic network to predict the parameters of the language-dependent text encoder that could realize better cross-lingual knowledge sharing in the sequence-to-sequence TTS. Lastly, we synthesize Mandarin or Tibetan speech through the unique acoustic model. The baseline experimental results show that the separate encoder Mandarin-Tibetan cross-lingual TTS could handle the input of different languages better than the shared encoder Mandarin-Tibetan cross-lingual TTS. The experimental results further show that the proposed meta-learning-based Mandarin-Tibetan cross-lingual speech synthesis method could effectively improve the voice quality of synthesized speech in terms of naturalness and speaker similarity.
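The "dynamic network that predicts the parameters of the language-dependent text encoder" is a hypernetwork. Below is a minimal PyTorch sketch of one language-conditioned layer; the sizes, structure, and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LanguageConditionedLinear(nn.Module):
    """A small hypernetwork maps a learned language embedding to the weights
    and bias of a projection, so Mandarin and Tibetan share the hypernetwork
    but each receives its own encoder parameters."""
    def __init__(self, n_langs, in_dim=64, out_dim=64, lang_dim=16):
        super().__init__()
        self.lang_emb = nn.Embedding(n_langs, lang_dim)
        self.hyper = nn.Linear(lang_dim, in_dim * out_dim + out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, lang_id):                    # x: (batch, seq, in_dim)
        params = self.hyper(self.lang_emb(lang_id))   # flat parameter vector
        w = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim :]
        return x @ w.T + b                            # language-specific projection

# layer = LanguageConditionedLinear(n_langs=2)
# y = layer(torch.randn(8, 20, 64), torch.tensor(0))  # language 0, e.g. Mandarin
```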
11

Chen, Xilun, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. "Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification". Transactions of the Association for Computational Linguistics 6 (December 2018): 557–70. http://dx.doi.org/10.1162/tacl_a_00039.

Abstract
In recent years great success has been achieved in sentiment classification for English, thanks in part to the availability of copious annotated resources. Unfortunately, most languages do not enjoy such an abundance of labeled data. To tackle the sentiment classification problem in low-resource languages without adequate annotated data, we propose an Adversarial Deep Averaging Network (ADAN) to transfer the knowledge learned from labeled data on a resource-rich source language to low-resource languages where only unlabeled data exist. ADAN has two discriminative branches: a sentiment classifier and an adversarial language discriminator. Both branches take input from a shared feature extractor to learn hidden representations that are simultaneously indicative for the classification task and invariant across languages. Experiments on Chinese and Arabic sentiment classification demonstrate that ADAN significantly outperforms state-of-the-art systems.
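ADAN's alternating game between the shared extractor, the sentiment classifier, and the language discriminator can be compressed into one training step. The sketch below uses a plain cross-entropy discriminator for brevity (the paper's adversarial objective differs), and every module, optimizer, and hyperparameter name is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def adan_style_step(feat_net, cls_net, disc_net, opt_main, opt_disc,
                    src_x, src_y, tgt_x, lam=0.1):
    """One adversarial update: (1) train the discriminator to separate source
    from target features; (2) train extractor + classifier to label source
    sentiment while making target features look like source features."""
    src_lang = torch.zeros(len(src_x), dtype=torch.long)   # 0 = source
    tgt_lang = torch.ones(len(tgt_x), dtype=torch.long)    # 1 = target

    # (1) Discriminator step on detached (frozen) features.
    d_loss = (F.cross_entropy(disc_net(feat_net(src_x).detach()), src_lang)
              + F.cross_entropy(disc_net(feat_net(tgt_x).detach()), tgt_lang))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # (2) Main step: supervised loss on source + adversarial loss pushing
    #     target features toward the discriminator's "source" decision.
    c_loss = F.cross_entropy(cls_net(feat_net(src_x)), src_y)
    a_loss = F.cross_entropy(disc_net(feat_net(tgt_x)),
                             torch.zeros(len(tgt_x), dtype=torch.long))
    opt_main.zero_grad(); (c_loss + lam * a_loss).backward(); opt_main.step()
    return c_loss.item(), d_loss.item()
```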
12

Wang, Mengqiu, and Christopher D. Manning. "Cross-lingual Projected Expectation Regularization for Weakly Supervised Learning". Transactions of the Association for Computational Linguistics 2 (December 2014): 55–66. http://dx.doi.org/10.1162/tacl_a_00165.

Abstract
We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on the Chinese OntoNotes and German CoNLL-03 datasets.
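In schematic form (our notation, not the paper's exact formulation), the training objective pairs a supervised CRF likelihood on the labeled source side with a Generalized Expectation penalty that ties the target model's feature expectations to expectations projected across the word-aligned bitext:

```latex
\[
\mathcal{L}(\theta)
  = \sum_{(x,y)\in\mathcal{D}_{\mathrm{src}}} \log p_\theta(y \mid x)
  \;-\; \lambda \sum_{x_t\in\mathcal{D}_{\mathrm{tgt}}}
    \Delta\!\left( \tilde{\mathbb{E}}\left[ f(x_t,y) \right],\;
                   \mathbb{E}_{p_\theta(y \mid x_t)}\left[ f(x_t,y) \right] \right)
\]
% f: CRF feature functions; \tilde{E}: expectations projected from the source
% model through word alignments; \Delta: a divergence, e.g. squared error or KL.
```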
13

Fang, Yuwei, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. "FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12776–84. http://dx.doi.org/10.1609/aaai.v35i14.17512.

Abstract
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
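The KL-divergence self-teaching loss is the most portable ingredient: the model's own softened predictions on one side of the translation pair supervise the other side. A minimal PyTorch sketch; the temperature T is an assumption.

```python
import torch.nn.functional as F

def self_teaching_kl(student_logits, teacher_logits, T=2.0):
    """KL between the model's predictions on the translated input (student)
    and its own soft pseudo-labels on the original input (teacher)."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1).detach()
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```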
14

Ge, Ling, Chunming Hu, Guanghui Ma, Jihong Liu, and Hong Zhang. "Discrepancy and Uncertainty Aware Denoising Knowledge Distillation for Zero-Shot Cross-Lingual Named Entity Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18056–64. http://dx.doi.org/10.1609/aaai.v38i16.29762.

Abstract
The knowledge distillation-based approaches have recently yielded state-of-the-art (SOTA) results for cross-lingual NER tasks in zero-shot scenarios. These approaches typically employ a teacher network trained with the labelled source (rich-resource) language to infer pseudo-soft labels for the unlabelled target (zero-shot) language, and force a student network to approximate these pseudo labels to achieve knowledge transfer. However, previous works have rarely discussed the issue of pseudo-label noise caused by the source-target language gap, which can mislead the training of the student network and result in negative knowledge transfer. This paper proposes a discrepancy and uncertainty aware Denoising Knowledge Distillation model (DenKD) to tackle this issue. Specifically, DenKD uses a discrepancy-aware denoising representation learning method to optimize the class representations of the target language produced by the teacher network, thus enhancing the quality of pseudo labels and reducing noisy predictions. Further, DenKD employs an uncertainty-aware denoising method to quantify the pseudo-label noise and adjust the focus of the student network on different samples during knowledge distillation, thereby mitigating the noise's adverse effects. We conduct extensive experiments on 28 languages including 4 languages not covered by the pre-trained models, and the results demonstrate the effectiveness of our DenKD.
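One way to picture the uncertainty-aware part: down-weight the distillation loss on tokens where the teacher's pseudo-label distribution has high entropy. The sketch below illustrates that idea only; DenKD's actual uncertainty estimate and weighting scheme differ.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_kd(student_logits, teacher_logits):
    """Per-token distillation, scaled by teacher confidence (1 - normalized
    entropy), so noisy pseudo-labels contribute less to the student's loss."""
    p_t = F.softmax(teacher_logits, dim=-1).detach()            # (tokens, classes)
    entropy = -(p_t * torch.log(p_t.clamp_min(1e-9))).sum(-1)   # (tokens,)
    weight = 1.0 - entropy / torch.log(torch.tensor(float(p_t.size(-1))))
    per_token = F.kl_div(F.log_softmax(student_logits, dim=-1), p_t,
                         reduction="none").sum(-1)              # (tokens,)
    return (weight * per_token).mean()
```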
15

Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs". Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.

Abstract
In recent years, the scale of knowledge graphs and the number of entities have grown rapidly. Entity matching across different knowledge graphs has become an urgent problem to be solved for knowledge fusion. With the importance of entity matching being increasingly evident, the use of representation learning technologies to find matched entities has attracted extensive attention due to the computability of vector representations. However, existing studies on representation learning technologies cannot make full use of knowledge graph relevant multi-modal information. In this paper, we propose a new cross-lingual entity matching method (called CLEM) with knowledge graph representation learning on rich multi-modal information. The core is the multi-view intact space learning method to integrate embeddings of multi-modal information for matching entities. Experimental results on cross-lingual datasets show the superiority and competitiveness of our proposed method.
16

Zhang, Weitai, Lirong Dai, Junhua Liu, and Shijin Wang. "Improving Many-to-Many Neural Machine Translation via Selective and Aligned Online Data Augmentation". Applied Sciences 13, no. 6 (March 20, 2023): 3946. http://dx.doi.org/10.3390/app13063946.

Abstract
Multilingual neural machine translation (MNMT) models are theoretically attractive for low- and zero-resource language pairs thanks to cross-lingual knowledge transfer. Existing approaches mainly focus on English-centric directions and always underperform compared to their pivot-based counterparts for non-English directions. In this work, we aim to build a many-to-many MNMT system with an emphasis on the quality of non-English directions by exploring selective and aligned online data augmentation algorithms. Based on our findings showing that the augmented synthetic samples are not “the more, the better”, we propose selective online back-translation (SOBT) and thoroughly study different selection criteria to pick suitable samples for training. Furthermore, we boost SOBT with cross-lingual online substitution (CLOS), illustrated below, to align token representations and encourage transfer learning. Our intuition is based on the hypothesis that a universal cross-lingual representation leads to better multilingual translation performance, especially for non-English directions. Compared to previous state-of-the-art many-to-many MNMT models and conventional pivot-based methods, experiments on the IWSLT2014 and OPUS-100 translation benchmarks show that our approach achieves competitive or even better performance on English-centric directions and achieves up to ∼12 BLEU for non-English directions. All of our models and code are publicly available.
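Cross-lingual online substitution can be pictured as stochastic token replacement through a bilingual dictionary, so that aligned words land in shared contexts. A toy sketch; the probability p and the dictionary format are assumptions, and the paper's CLOS operates online during training.

```python
import random

def clos_substitute(tokens, bilingual_dict, p=0.15):
    """Randomly swap source tokens for dictionary translations (schematic)."""
    return [random.choice(bilingual_dict[t])
            if t in bilingual_dict and random.random() < p else t
            for t in tokens]

# clos_substitute(["the", "cat", "sleeps"],
#                 {"cat": ["gato"], "sleeps": ["duerme"]})
```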
17

Cui, Ruixiang, Rahul Aralikatte, Heather Lent, and Daniel Hershcovich. "Compositional Generalization in Multilingual Semantic Parsing over Wikidata". Transactions of the Association for Computational Linguistics 10 (2022): 937–55. http://dx.doi.org/10.1162/tacl_a_00499.

Abstract
Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ (Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese, and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-of-the-art pretrained multilingual encoders. Furthermore, our methodology, dataset, and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources.
18

He, Keqing, Weiran Xu, and Yuanmeng Yan. "Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding". IEEE Access 8 (2020): 29407–16. http://dx.doi.org/10.1109/access.2020.2972925.

19

Zhou, Shuyan, Shruti Rijhwani, John Wieting, Jaime Carbonell, and Graham Neubig. "Improving Candidate Generation for Low-resource Cross-lingual Entity Linking". Transactions of the Association for Computational Linguistics 8 (July 2020): 109–24. http://dx.doi.org/10.1162/tacl_a_00303.

Abstract
Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts. The first step of (X)EL is candidate generation, which retrieves a list of plausible candidate entities from the target-language KB for each mention. Approaches based on resources from Wikipedia have proven successful in the realm of relatively high-resource languages, but these do not extend well to low-resource languages with few, if any, Wikipedia pages. Recently, transfer learning methods have been shown to reduce the demand for resources in the low-resource languages by utilizing resources in closely related languages, but the performance still lags far behind their high-resource counterparts. In this paper, we first assess the problems faced by current entity candidate generation methods for low-resource XEL, then propose three improvements that (1) reduce the disconnect between entity mentions and KB entries, and (2) improve the robustness of the model to low-resource scenarios. The methods are simple, but effective: We experiment with our approach on seven XEL datasets and find that they yield an average gain of 16.9% in Top-30 gold candidate recall, compared with state-of-the-art baselines. Our improved model also yields an average gain of 7.9% in in-KB accuracy of end-to-end XEL.
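A standard way to shrink the mention/KB-entry disconnect is character n-gram matching, which tolerates the spelling variation typical of low-resource transliterations. A toy scikit-learn sketch; the entity names, vectorizer parameters, and neighbor count are illustrative, and the paper's improvements go well beyond this baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy KB: entity names in the target-language knowledge base.
kb_entities = ["Barack Obama", "Baraka (film)", "Ohm's law", "Mount Olympus"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
kb_matrix = vec.fit_transform(kb_entities)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(kb_matrix)

mention = "Barak Obaama"  # noisy source-language mention
_, idx = index.kneighbors(vec.transform([mention]))
print([kb_entities[i] for i in idx[0]])  # plausible candidate entities
```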
20

Ge, Ling, Chunming Hu, Guanghui Ma, Hong Zhang, and Jihong Liu. "ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12818–26. http://dx.doi.org/10.1609/aaai.v37i11.26507.

Abstract
For named entity recognition (NER) in zero-resource languages, utilizing knowledge distillation methods to transfer language-independent knowledge from the rich-resource source languages to zero-resource languages is an effective means. Typically, these approaches adopt a teacher-student architecture, where the teacher network is trained in the source language, and the student network seeks to learn knowledge from the teacher network and is expected to perform well in the target language. Despite the impressive performance achieved by these methods, we argue that they have two limitations. Firstly, the teacher network fails to effectively learn language-independent knowledge shared across languages due to the differences in the feature distribution between the source and target languages. Secondly, the student network acquires all of its knowledge from the teacher network and ignores the learning of target language-specific knowledge. Undesirably, these limitations would hinder the model's performance in the target language. This paper proposes an unsupervised prototype knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method to achieve class feature alignment by adjusting the prototypes' distance from the source and target languages, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototype self-training method to learn the intrinsic structure of the language by retraining the student network on the target data using samples' distance information from prototypes, thereby enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
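Prototypes here are simply class-wise mean feature vectors, and distances to them drive both the alignment and the self-training. A minimal sketch of the two basic operations (it assumes every class appears in the batch; ProKD's contrastive alignment and retraining loop build on top of this):

```python
import torch

def class_prototypes(features, labels, n_classes):
    """Class-wise mean feature vectors."""
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(n_classes)])

def nearest_prototype(features, protos):
    """Pseudo-label each feature by its nearest prototype; the distance can
    serve as a confidence signal when retraining the student."""
    dists = torch.cdist(features, protos)   # (n, n_classes)
    min_dist, labels = dists.min(dim=1)
    return labels, min_dist
```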
21

Du, Yangkai, Tengfei Ma, Lingfei Wu, Xuhong Zhang, and Shouling Ji. "AdaCCD: Adaptive Semantic Contrasts Discovery Based Cross Lingual Adaptation for Code Clone Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17942–50. http://dx.doi.org/10.1609/aaai.v38i16.29749.

Abstract
Code Clone Detection, which aims to retrieve functionally similar programs from large code bases, has been attracting increasing attention. Modern software often involves a diverse range of programming languages. However, current code clone detection methods are generally limited to only a few popular programming languages due to insufficient annotated data as well as their own model design constraints. To address these issues, we present AdaCCD, a novel cross-lingual adaptation method that can detect cloned codes in a new language without annotations in that language. AdaCCD leverages language-agnostic code representations from pre-trained programming language models and proposes an Adaptively Refined Contrastive Learning framework to transfer knowledge from resource-rich languages to resource-poor languages. We evaluate the cross-lingual adaptation results of AdaCCD by constructing a multilingual code clone detection benchmark consisting of 5 programming languages. AdaCCD achieves significant improvements over other baselines, and achieves comparable performance to supervised fine-tuning.
22

Ge, Ling, Chunming Hu, Guanghui Ma, Jihong Liu, and Hong Zhang. "DA-Net: A Disentangled and Adaptive Network for Multi-Source Cross-Lingual Transfer Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18047–55. http://dx.doi.org/10.1609/aaai.v38i16.29761.

Abstract
Multi-Source cross-lingual transfer learning deals with the transfer of task knowledge from multiple labelled source languages to an unlabeled target language under the language shift. Existing methods typically focus on weighting the predictions produced by language-specific classifiers of different sources that follow a shared encoder. However, all source languages share the same encoder, which is updated by all these languages. The extracted representations inevitably contain different source languages' information, which may disturb the learning of the language-specific classifiers. Additionally, due to the language gap, language-specific classifiers trained with source labels are unable to make accurate predictions for the target language. Both facts impair the model's performance. To address these challenges, we propose a Disentangled and Adaptive Network (DA-Net). Firstly, we devise a feedback-guided collaborative disentanglement method that seeks to purify input representations of classifiers, thereby mitigating mutual interference from multiple sources. Secondly, we propose a class-aware parallel adaptation method that aligns class-level distributions for each source-target language pair, thereby alleviating the language pairs' language gap. Experimental results on three different tasks involving 38 languages validate the effectiveness of our approach.
23

Wu, Qianhui, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang, and Chin-Yew Lin. "Enhanced Meta-Learning for Cross-Lingual Named Entity Recognition with Minimal Resources". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9274–81. http://dx.doi.org/10.1609/aaai.v34i05.6466.

Abstract
For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). While all existing methods directly transfer a source-learned model to a target language, in this paper we propose to fine-tune the learned model with a few similar examples given a test case, which could benefit the prediction by leveraging the structural and semantic information conveyed in such similar examples. To this end, we present a meta-learning algorithm to find a good model parameter initialization that can fast adapt to the given test case, and propose to construct multiple pseudo-NER tasks for meta-training by computing sentence similarities. To further improve the model's generalization ability across different languages, we introduce a masking scheme and augment the loss function with an additional maximum term during meta-training. We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. The results show that our approach significantly outperforms existing state-of-the-art methods across the board.
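The "good initialization that fast-adapts" idea can be sketched with a first-order meta-update in the spirit of Reptile, standing in for the paper's actual algorithm; the task format and hyperparameters are assumptions.

```python
import torch

def meta_step(model, tasks, loss_fn, meta_lr=0.1, inner_lr=1e-3):
    """Adapt a copy of the initialization on each pseudo task, then move the
    initialization toward the average of the adapted weights."""
    init = [p.detach().clone() for p in model.parameters()]
    delta = [torch.zeros_like(p) for p in init]
    for task_batches in tasks:                    # each task: list of (x, y)
        with torch.no_grad():                     # reset to the initialization
            for p, p0 in zip(model.parameters(), init):
                p.copy_(p0)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for x, y in task_batches:                 # inner adaptation steps
            opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()
        with torch.no_grad():
            for d, p, p0 in zip(delta, model.parameters(), init):
                d += p - p0
    with torch.no_grad():                         # outer (meta) update
        for p, p0, d in zip(model.parameters(), init, delta):
            p.copy_(p0 + meta_lr * d / len(tasks))
```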
24

Pasini, Tommaso, Alessandro Raganato, and Roberto Navigli. "XL-WSD: An Extra-Large and Cross-Lingual Evaluation Framework for Word Sense Disambiguation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13648–56. http://dx.doi.org/10.1609/aaai.v35i15.17609.

Abstract
Transformer-based architectures brought a breeze of change to Word Sense Disambiguation (WSD), improving models' performances by a large margin. The fast development of new approaches has been further encouraged by a well-framed evaluation suite for English, which has allowed their performances to be kept track of and compared fairly. However, other languages have remained largely unexplored, as testing data are available for a few languages only and the evaluation setting is rather muddled. In this paper, we untangle this situation by proposing XL-WSD, a cross-lingual evaluation benchmark for the WSD task featuring sense-annotated development and test sets in 18 languages from six different linguistic families, together with language-specific silver training data. We leverage XL-WSD datasets to conduct an extensive evaluation of neural and knowledge-based approaches, including the most recent multilingual language models. Results show that zero-shot knowledge transfer across languages is a promising research direction within the WSD field, especially when considering low-resourced languages where large pre-trained multilingual models still perform poorly. We make the evaluation suite and the code for performing the experiments available at https://sapienzanlp.github.io/xl-wsd/.
25

Chen, Muzi. "Analysis on Transfer Learning Models and Applications in Natural Language Processing". Highlights in Science, Engineering and Technology 16 (November 10, 2022): 446–52. http://dx.doi.org/10.54097/hset.v16i.2609.

Abstract
Many machine learning algorithms assume that the training data and the testing data share the same feature space or distribution. Transfer learning (TL) arises from relaxing this assumption, tolerating differences in feature spaces and data distributions; it is an optimization that improves performance by carrying knowledge from task to task. This paper covers the basics of transfer learning and summarizes relevant experimental results of popular applications using transfer learning in the natural language processing (NLP) field. The mathematical definition of TL is briefly mentioned. After that, basic knowledge is introduced, including the different categories of TL and a comparison between TL and traditional machine learning models. Then, applications that mainly focus on question answering, cyberbullying detection, and sentiment analysis are presented. Other applications are also briefly introduced, such as Named Entity Recognition (NER), Intent Classification, and Cross-Lingual Learning. For each application, this study provides a reference on transfer learning models for related research.
26

Ebrahimi, Mohammadreza, Yidong Chai, Sagar Samtani, and Hsinchun Chen. "Cross-Lingual Cybersecurity Analytics in the International Dark Web with Adversarial Deep Representation Learning". MIS Quarterly 46, no. 2 (May 19, 2022): 1209–26. http://dx.doi.org/10.25300/misq/2022/16618.

Abstract
International dark web platforms operating within multiple geopolitical regions and languages host a myriad of hacker assets such as malware, hacking tools, hacking tutorials, and malicious source code. Cybersecurity analytics organizations employ machine learning models trained on human-labeled data to automatically detect these assets and bolster their situational awareness. However, the lack of human-labeled training data is prohibitive when analyzing foreign-language dark web content. In this research note, we adopt the computational design science paradigm to develop a novel IT artifact for cross-lingual hacker asset detection (CLHAD). CLHAD automatically leverages the knowledge learned from English content to detect hacker assets in non-English dark web platforms. CLHAD encompasses a novel Adversarial deep representation learning (ADREL) method, which generates multilingual text representations using generative adversarial networks (GANs). Drawing upon the state of the art in cross-lingual knowledge transfer, ADREL is a novel approach to automatically extract transferable text representations and facilitate the analysis of multilingual content. We evaluate CLHAD on Russian, French, and Italian dark web platforms and demonstrate its practical utility in hacker asset profiling, and conduct a proof-of-concept case study. Our analysis suggests that cybersecurity managers may benefit more from focusing on Russian to identify sophisticated hacking assets. In contrast, financial hacker assets are scattered among several dominant dark web languages. Managerial insights for security managers are discussed at operational and strategic levels.
27

Chen, Nuo, Linjun Shou, Ming Gong, and Jian Pei. "From Good to Best: Two-Stage Training for Cross-Lingual Machine Reading Comprehension". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10501–8. http://dx.doi.org/10.1609/aaai.v36i10.21293.

Abstract
Cross-lingual Machine Reading Comprehension (xMRC) is a challenging task due to the lack of training data in low-resource languages. Recent approaches use training data only in a resource-rich language (such as English) to fine-tune large-scale cross-lingual pre-trained language models, which transfer knowledge from resource-rich languages (source) to low-resource languages (target). Due to the big difference between languages, a model fine-tuned only on the source language may not perform well for target languages. In our study, we make an interesting observation that while the top 1 result predicted by the previous approaches may often fail to hit the ground-truth answer, there are still good chances for the correct answer to be contained in the set of top k predicted results. Intuitively, the previous approaches have given the model a certain level of capability to roughly distinguish good answers from bad ones. However, without sufficient training data, it is not powerful enough to capture the nuances between the accurate answer and those approximate ones. Based on this observation, we develop a two-stage approach to enhance the model performance. The first stage targets recall: we design a hard-learning (HL) algorithm to maximize the likelihood that the top k predictions contain the accurate answer. The second stage focuses on precision, where an answer-aware contrastive learning (AA-CL) mechanism is developed to learn the minute differences between the accurate answer and other candidates. Extensive experiments show that our model significantly outperforms strong baselines on two cross-lingual MRC benchmark datasets.
28

Jiang, Aiqi, and Arkaitz Zubiaga. "SexWEs: Domain-Aware Word Embeddings via Cross-Lingual Semantic Specialisation for Chinese Sexism Detection in Social Media". Proceedings of the International AAAI Conference on Web and Social Media 17 (June 2, 2023): 447–58. http://dx.doi.org/10.1609/icwsm.v17i1.22159.

Abstract
The goal of sexism detection is to mitigate negative online content targeting certain gender groups of people. However, the limited availability of labeled sexism-related datasets makes it problematic to identify online sexism for low-resource languages. In this paper, we address the task of automatic sexism detection in social media for one low-resource language: Chinese. Rather than collecting new sexism data or building cross-lingual transfer learning models, we develop a cross-lingual domain-aware semantic specialisation system in order to make the most of existing data. Semantic specialisation is a technique for retrofitting pre-trained distributional word vectors by integrating external linguistic knowledge (such as lexico-semantic relations) into the specialised feature space. To do this, we leverage semantic resources for sexism from a high-resource language (English) to specialise pre-trained word vectors in the target language (Chinese) to inject domain knowledge. We demonstrate the benefit of our sexist word embeddings (SexWEs) specialised by our framework via intrinsic evaluation of word similarity and extrinsic evaluation of sexism detection. Compared with other specialisation approaches and Chinese baseline word vectors, our SexWEs shows average score improvements of 0.033 and 0.064 in intrinsic and extrinsic evaluations, respectively. The ablative results and visualisation of SexWEs also prove the effectiveness of our framework on retrofitting word vectors in low-resource languages.
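Semantic specialisation retrofits pre-trained vectors with external lexical constraints. The classic retrofitting update (Faruqui et al., 2015), representative of the family this system builds on, fits in a few lines of NumPy; alpha, beta, and the lexicon format are assumptions.

```python
import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Pull each word toward its lexical neighbours while staying close to
    its original distributional vector (Faruqui et al., 2015-style)."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in new or not nbrs:
                continue
            # weighted average of the original vector and neighbour vectors
            new[w] = (alpha * vectors[w] + beta * sum(new[n] for n in nbrs)) \
                     / (alpha + beta * len(nbrs))
    return new
```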
29

Pajoohesh, Parto. "A Probe into Lexical Depth: What is the Direction of Transfer for L1 Literacy and L2 Development?" Heritage Language Journal 5, no. 1 (June 30, 2007): 117–46. http://dx.doi.org/10.46538/hlj.5.1.6.

Abstract
This paper examines the intersection between heritage language (HL) learning and the development of English and Farsi deep lexical knowledge. The study compares two groups of Farsi-English bilingual children with different HL educational backgrounds and a group of English-only children by testing their paradigmatic-syntagmatic knowledge of words. A statistical analysis of the children's deep lexical knowledge was conducted in light of their HL literacy experience, second language (English) schooling, and length of residence. The findings revealed that longer length of residence and L2 schooling correlate with better performance on the L2 measures of lexical depth, whereas longer residence in the home country and first language (L1) formal schooling do not correlate with better performance on the Farsi measures. The study concluded that, in the long term, the learning of a heritage language, in combination with L2 academic instruction, is more conducive to the cross-lingual transfer of academic skills.
30

S, Tarun. "Bridging Languages through Images: A Multilingual Text-to-Image Synthesis Approach". International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 11, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33773.

Abstract
This research investigates the challenges posed by the predominant focus on English-language text-to-image generation (TTI), a consequence of the lack of annotated image-caption data in other languages. The resulting inequitable access to TTI technology in non-English-speaking regions motivates research on multilingual TTI (mTTI) and the potential of neural machine translation (NMT) to facilitate its development. The study presents two main contributions. Firstly, a systematic empirical study employing a multilingual multi-modal encoder evaluates standard cross-lingual NLP methods applied to mTTI, including TRANSLATE TRAIN, TRANSLATE TEST, and ZERO-SHOT TRANSFER. Secondly, a novel parameter-efficient approach called Ensemble Adapter (ENSAD) is introduced, leveraging multilingual text knowledge within the mTTI framework to bridge the language gap and enhance mTTI performance. Additionally, the research addresses challenges associated with transformer-based TTI models, such as slow generation and complexity for high-resolution images. It proposes hierarchical transformers and local parallel autoregressive generation techniques to overcome these limitations. A 6B-parameter transformer pretrained with a cross-modal general language model (CogLM) and fine-tuned for fast super-resolution results in a new text-to-image system, denoted as It, which demonstrates competitive performance compared to the state-of-the-art DALL-E-2. Furthermore, It supports interactive text-guided editing on images, offering a versatile and efficient solution for text-to-image generation. Keywords: Text-to-image generation, Multilingual TTI (mTTI), Neural machine translation (NMT), Cross-lingual NLP, Ensemble Adapter (ENSAD), Hierarchical transformers, Super-resolution, Transformer-based models, Cross-modal general language model (CogLM).
31

Xu, Yaoli, Jinjun Zhong, Suzhi Zhang, Chenglin Li, Pu Li, Yanbu Guo, Yuhua Li, Hui Liang, and Yazhou Zhang. "A Domain-Oriented Entity Alignment Approach Based on Filtering Multi-Type Graph Neural Networks". Applied Sciences 13, no. 16 (August 14, 2023): 9237. http://dx.doi.org/10.3390/app13169237.

Abstract
Owing to the heterogeneity and incomplete information present in various domain knowledge graphs, the alignment of distinct source entities that represent an identical real-world entity becomes imperative. Existing methods focus on cross-lingual knowledge graph alignment, and assume that the entities of knowledge graphs in the same language are unique. However, due to the ambiguity of language, heterogeneous knowledge graphs in the same language are often duplicated, and relationship triples are far less numerous than those of cross-lingual knowledge graphs. Moreover, existing methods rarely exclude noisy entities in the process of alignment. These issues make it impossible for existing methods to deal effectively with the entity alignment of domain knowledge graphs. In order to address them, we propose a novel entity alignment approach based on domain-oriented embedded representation (DomainEA). Firstly, a filtering mechanism employs the language model to extract the semantic features of entities and to exclude noisy entities. Secondly, a Structural Aggregator (SA) incorporates multiple hidden layers to generate high-order neighborhood-aware embeddings of entities that have few relationship connections. An Attribute Aggregator (AA) introduces self-attention to dynamically calculate weights that represent the importance of the attribute values of the entities. Finally, the approach calculates a transformation matrix to map the embeddings of distinct domain knowledge graphs onto a unified space, and matches entities via the joint embeddings of the SA and AA. Compared to six state-of-the-art methods, our experimental results on multiple food datasets show the following: (i) Our approach achieves an average improvement of 6.9% on MRR. (ii) The size of the dataset has a subtle influence on our approach; there is a positive correlation between the expansion of the dataset size and an improvement in most of the metrics. (iii) We can achieve a significant improvement in the level of recall by employing a filtering mechanism that is limited to the top-100 nearest entities as the candidate pairs.
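The final mapping step, learning a transformation matrix onto a unified space from anchor entity pairs, is classically solved with orthogonal Procrustes. A NumPy sketch as an illustrative baseline; DomainEA's exact estimator may differ.

```python
import numpy as np

def procrustes_map(src_emb, tgt_emb):
    """Orthogonal matrix W minimizing ||src_emb @ W - tgt_emb||_F, given
    row-aligned embeddings of known anchor entity pairs."""
    u, _, vt = np.linalg.svd(src_emb.T @ tgt_emb)
    return u @ vt

# Usage: map all entities of graph A with W, then match against graph B's
# embeddings by cosine similarity.
```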
32

Yadav, Siddharth, and Tanmoy Chakraborty. "Zero-Shot Sentiment Analysis for Code-Mixed Data". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15941–42. http://dx.doi.org/10.1609/aaai.v35i18.17967.

Abstract
Code-mixing is the practice of alternating between two or more languages. Most sentiment analysis research has been monolingual, and such models perform poorly on code-mixed text. We introduce methods that use multilingual and cross-lingual embeddings to transfer knowledge from monolingual text to code-mixed text for code-mixed sentiment analysis. Our methods handle code-mixed text through zero-shot learning and beat the state of the art on English-Spanish code-mixed sentiment analysis by an absolute 3% F1-score. We achieve 0.58 F1-score (without a parallel corpus) and 0.62 F1-score (with a parallel corpus) on the same benchmark in a zero-shot way, as compared to 0.68 F1-score in supervised settings. Our code is publicly available at github.com/sedflix/unsacmt.
33

Wu, Ji, Madeleine Orr, Kurumi Aizawa, and Yuhei Inoue. "Language Relativity in Legacy Literature: A Systematic Review in Multiple Languages". Sustainability 13, no. 20 (October 14, 2021): 11333. http://dx.doi.org/10.3390/su132011333.

Abstract
Since the Olympic Agenda 2020, legacy has been widely used as a justification for hosting the Olympic Games, through which sustainable development can be achieved for both events and host cities. To date, no universal definition of legacy has been established, which presents challenges for legacy-related international knowledge transfer among host cities. To address this gap, a multilingual systematic review of the literature regarding the concept of legacy was conducted in French, Japanese, Chinese, and English. Using English literature as a baseline, points of convergence and divergence among the languages were identified. While all four languages value the concept of legacy as an important facet of mega-events, significant differences were found within each language. This finding highlights the importance of strategies that align different cultures when promoting sustainable development of some global movements such as the Olympic legacy. Sport management is replete with international topics, such as international events and sport for development, and each topic is studied simultaneously in several languages and with potentially differing frameworks and perspectives. Thus, literature reviews that examine the English literature, exclusively, are innately limited in scope. The development of partnerships and resources that facilitate cross-lingual and cross-cultural consultation and collaboration is an important research agenda. More research is needed on knowledge translation across languages.
34

Guseynova, Innara. "Digital Transformation and Its Consequences for Heritage Languages". Nizhny Novgorod Linguistics University Bulletin, Special issue (December 31, 2020): 44–58. http://dx.doi.org/10.47388/2072-3490/lunn2020-si-44-58.

Abstract
The article attempts to conduct a primary analysis of the consequences of digital transformation for heritage languages which make up the cultural and historical legacy of individual ethnic communities. In a multilingual society, such a study requires an integrated approach, which takes into account the sociolinguistic parameters of various target audiences, communication channels aimed to disseminate and transfer information, discourse analyses of linguistic means, as well as extralinguistic factors impacting the development of different environments. It is equally important to study the specificity of the socio-cultural interaction between communicants in the professional sphere, which primarily indicates the institutional status of participants in communication, as well as their observance / nonobservance of linguistic norms. The latter seems extremely important with regard to heritage languages and their linguistic status in institutional discourse. In many respects, observance / nonobservance of linguistic norms makes it possible, on the one hand, to define the linguistic portrait of the communicant and, on the other hand, to assess the survival of national identity. Both aspects are central across various types of institutional discourse, including political, marketing, advertising discourse etc. The analysis of the institutional aspects of cross-cultural and cross-lingual communication is carried out using an etiological approach that allows to determine the degree of importance of sociolinguistic parameters to achieve adequacy of socio-cultural interaction of representatives of different linguocultures. It is performed indirectly using various language pairs, in the context of heritage bilingualism, as well as interpersonal interaction. The article also expounds consequences of the global turn towards digital transformation affecting the overall knowledge in liberal arts and human sciences in general and cross-lingual and cross-cultural communication in particular. The study discusses areas of application of heritage language resources such as locus branding, image making, reports of scientific and technical achievements, etc. The article concludes by inferring the need to preserve linguistic diversity and its teleological use in various types of institutional discourse.
35

Shwartz, Mila, Mark Leikin, and David L. Share. "Bi-literate bilingualism versus mono-literate bilingualism". Written Language and Literacy 8, no. 2 (December 31, 2005): 103–30. http://dx.doi.org/10.1075/wll.8.2.08shw.

Abstract
The present study compared the early Hebrew (L2) literacy development of three groups: two groups of bilinguals (bi-literate and mono-literate Russian-Hebrew speakers) and a third group of monolingual Hebrew-speakers. We predicted that bi-literacy rather than bilingualism is the key variable as regards L2 literacy learning. In a longitudinal design, a variety of linguistic, meta-linguistic and cognitive tasks were administered at the commencement of first grade, with Hebrew reading and spelling assessed at the end of the year. Results demonstrated that bi-literate bilinguals were far in advance of both mono-literate (Russian-Hebrew) bilinguals and monolingual Hebrew-speakers on all reading fluency measures at the end of Grade 1. Bi-literate bilinguals also showed a clear advantage over mono-literate bilingual and monolingual peers on all phonological awareness tasks. The mono-literate bilinguals also demonstrated some modest gains over their monolingual peers in Grade 1 reading accuracy. All three groups performed similarly on L2 linguistic tasks. These findings confirm Bialystok's (2002) assertion that bilingualism per se may not be the most influential factor in L2 reading acquisition. Early (L1) literacy acquisition, however, can greatly enhance L2 literacy development. The present findings also suggest that the actual mechanism of cross-linguistic transfer is the insight gained into the alphabetic principle common to all alphabetic writing systems, and not merely the knowledge of a specific letter-sound code such as the Roman orthography.
36

Imin, Gvzelnur, Mijit Ablimit, Hankiz Yilahun, and Askar Hamdulla. "A Character String-Based Stemming for Morphologically Derivative Languages". Information 13, no. 4 (March 28, 2022): 170. http://dx.doi.org/10.3390/info13040170.

Abstract
Morphologically derivative languages form words by fusing stems and suffixes, so extracting stems is important for cross-lingual alignment and knowledge transfer. Because phonetic harmony and disharmony arise when linguistic particles are combined, both phonetic and morphological changes need to be analyzed. This paper proposes a multilingual stemming method that learns morpho-phonetic changes automatically based on character-level embeddings and sequential modeling. First, character feature embeddings at the sentence level are used as input; a BiLSTM model captures the forward and backward context sequences, and an attention mechanism is added for weight learning, extracting global feature information to capture stem and affix boundaries. Finally, a CRF layer learns additional information from the sequence features to describe context more effectively. To verify the effectiveness of the above model, it is compared with traditional models on two different datasets covering three derivative languages: Uyghur, Kazakh and Kirghiz. The experimental results show that the proposed model achieves the best stemming performance on multilingual sentence-level datasets, outperforms the traditional baselines, takes full account of the characteristics of the data, and requires less human intervention.
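To make the described architecture concrete, here is a minimal PyTorch sketch of a character-level BiLSTM-plus-attention boundary tagger. For brevity the paper's CRF output layer is approximated by a per-character softmax, and all tag names and dimensions are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn as nn

    class CharStemmer(nn.Module):
        def __init__(self, n_chars, n_tags=3, emb_dim=64, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(n_chars, emb_dim, padding_idx=0)
            self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                  bidirectional=True)
            # Self-attention over BiLSTM states to weight global context.
            self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                              batch_first=True)
            self.out = nn.Linear(2 * hidden, n_tags)  # e.g. B-STEM, I-STEM, SUFFIX

        def forward(self, char_ids):
            h = self.emb(char_ids)          # (batch, chars, emb_dim)
            h, _ = self.bilstm(h)           # forward and backward context
            ctx, _ = self.attn(h, h, h)     # globally weighted features
            return self.out(ctx)            # per-character tag logits

    model = CharStemmer(n_chars=100)
    logits = model(torch.randint(1, 100, (2, 12)))  # two 12-character inputs
    print(logits.argmax(-1))                        # predicted boundary tags

In a full implementation, a CRF layer over these logits (trained with sequence log-likelihood and decoded with Viterbi) would replace the argmax.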
37

Kumari, Divya, Asif Ekbal, Rejwanul Haque, Pushpak Bhattacharyya, and Andy Way. "Reinforced NMT for Sentiment and Content Preservation in Low-resource Scenario". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 4 (June 28, 2021): 1–27. http://dx.doi.org/10.1145/3450970.

Abstract
The preservation of domain knowledge from source to target is crucial in any translation workflow. Hence, translation service providers that use machine translation (MT) in production could reasonably expect the translation process to transfer both the underlying pragmatics and the semantics of the source-side sentences into the target language. However, recent studies suggest that MT systems often fail to preserve crucial information (e.g., sentiment, emotion, gender traits) embedded in the source text. In this context, raw automatic translations are often fed directly to other natural language processing (NLP) applications (e.g., a sentiment classifier) in a cross-lingual platform, so the loss of such information during translation can negatively affect the performance of downstream NLP tasks that rely heavily on the MT output. In our current research, we carefully balance both sides (i.e., sentiment and semantics) during translation by controlling a global-attention-based neural MT (NMT) system to generate translations that encode the underlying sentiment of a source sentence while preserving its non-opinionated semantic content. To this end, we use a state-of-the-art reinforcement learning method, namely actor-critic, with a novel reward combination module, to fine-tune the NMT system so that it learns to generate translations best suited for a downstream task, viz. sentiment classification, while ensuring that the source-side semantics remain intact. Experimental results for the Hindi–English language pair show that our proposed method significantly improves the performance of the sentiment classifier and also yields an improved NMT system.
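The reward-combination idea can be illustrated with a small sketch: a candidate translation receives one score for sentiment preservation and one for semantic content preservation, and a weighted blend of the two serves as the scalar reward in an actor-critic update. The scoring inputs, the weighting scheme, and all names below are illustrative assumptions, not the authors' implementation.

    import torch

    def combined_reward(sentiment_score, semantic_score, alpha=0.5):
        # Blend the two signals; alpha trades off sentiment preservation
        # against content preservation (an assumed, tunable weighting).
        return alpha * sentiment_score + (1.0 - alpha) * semantic_score

    # Toy actor-critic step: the critic's value estimate is the baseline,
    # and the advantage weights the policy-gradient term.
    log_prob = torch.tensor(-2.3, requires_grad=True)  # log p(translation)
    value = torch.tensor(0.4, requires_grad=True)      # critic's estimate

    reward = combined_reward(sentiment_score=0.9, semantic_score=0.7)
    advantage = reward - value.detach()

    actor_loss = -advantage * log_prob       # push towards high-reward outputs
    critic_loss = (reward - value) ** 2      # fit the value estimate
    (actor_loss + critic_loss).backward()

In the actual system the sentiment score would come from the downstream classifier and the semantic score from a content-similarity measure between source and translation.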
38

Ebel, S., H. Blättermann, U. Weik, J. Margraf-Stiksrud, and R. Deinzer. "High Plaque Levels after Thorough Toothbrushing: What Impedes Efficacy?" JDR Clinical & Translational Research 4, no. 2 (November 14, 2018): 135–42. http://dx.doi.org/10.1177/2380084418813310.

Abstract
Objectives: Previous studies have shown high levels of dental plaque after toothbrushing and poor toothbrushing performance. There is a lack of evidence about what oral hygiene behavior predicts persistent plaque. The present cross-sectional study thus relates toothbrushing behavior to oral cleanliness after brushing and to gingivitis. Methods: All young adults from a central town in Germany who turned 18 y old in the year prior to the examination were invited to participate in the study. They were asked to clean their teeth to their best abilities while being filmed. Videos were analyzed regarding brushing movements (vertical, circular, horizontal, modified Bass technique) and evenness of distribution of brushing time across vestibular (labial/buccal) and palatinal (lingual/palatinal) surfaces. Dental status, gingival bleeding, and oral cleanliness after oral hygiene were assessed. Results: Ninety-eight young adults participated in the study. Gingival margins showed persistent plaque at 69.48% ± 12.31% sites (mean ± SD) after participants brushed to their best abilities. Regression analyses with the brushing movements and evenness of distribution of brushing time as predictors explained 15.2% (adjusted R2 = 0.152, P = 0.001) of the variance in marginal plaque and 19.4% (adjusted R2 = 0.194, P < 0.001) of the variance in bleeding. Evenness of distribution of brushing time was the most important behavioral predictor. Conclusion: Even when asked to perform optimal oral hygiene, young German adults distributed their brushing time across surfaces unevenly. Compared with brushing movements, this factor turned out to be of more significance when explaining the variance of plaque and bleeding. Knowledge Transfer Statement: Results of this study can help clinicians and patients understand the meaning of specific behavioral aspects of toothbrushing for oral cleanliness and oral health.
39

Pikuliak, Matúš. "Cross-Lingual Learning With Distributed Representations". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 29, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11348.

Abstract
Cross-lingual learning can help to bring state-of-the-art deep learning solutions to smaller languages, which generally lack the resources needed to train advanced neural networks. By transferring knowledge across languages, we can improve the results for various NLP tasks.
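One standard way to realise cross-lingual transfer with distributed representations is to map two monolingual embedding spaces onto each other with an orthogonal (Procrustes) transform learned from a small seed dictionary, after which labels or lexicon entries can be transferred via nearest neighbours. The sketch below uses random toy matrices; the dimensions and the seed dictionary are illustrative assumptions, and the paper itself does not prescribe this exact method.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))  # source-language vectors for seed pairs
    Y = rng.normal(size=(100, 50))  # aligned target-language vectors

    # Orthogonal Procrustes: W = argmin ||XW - Y||_F subject to W^T W = I,
    # solved in closed form via the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt

    # Project source vectors into the target space; the nearest target
    # neighbours of a projected vector are its translation candidates.
    mapped = X @ W
    print(np.linalg.norm(mapped - Y))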
40

Sen, Pawan, Rohit Sharma, Lucky Verma, and Pari Tenguriya. "Empowering Multilingual AI: Cross-Lingual Transfer Learning". International Journal of Psychosocial Rehabilitation, March 2020, 7915–18. http://dx.doi.org/10.61841/v24i3/400259.

Abstract
Multilingual Natural Language Processing (NLP) and Cross-Lingual Transfer Learning have emerged as pivotal fields in the realm of language technology. This abstract explores the essential concepts and methodologies behind these areas, shedding light on their significance in a world characterized by linguistic diversity. Multilingual NLP enables machines to process and generate text in multiple languages, breaking down communication barriers and fostering global collaboration. Cross-lingual transfer learning, on the other hand, leverages knowledge from one language to enhance NLP tasks in another, facilitating efficient resource utilization and improved model performance. The abstract highlights the growing relevance of these approaches in a multilingual and interconnected world, underscoring their potential to reshape the future of natural language understanding and communication.
41

Ahmat, Ahtamjan, Yating Yang, Bo Ma, Rui Dong, Kaiwen Lu, and Lei Wang. "WAD-X: Improving Zero-shot Cross-lingual Transfer via Adapter-based Word Alignment". ACM Transactions on Asian and Low-Resource Language Information Processing, July 19, 2023. http://dx.doi.org/10.1145/3610289.

Abstract
Multilingual pre-trained language models (mPLMs) have achieved remarkable performance on zero-shot cross-lingual transfer learning. However, most mPLMs only implicitly encourage cross-lingual alignment during the pre-training stage, making it hard to capture accurate word alignment across languages. In this paper, we propose Word-align ADapters for Cross-lingual transfer (WAD-X) to explicitly align the word representations of mPLMs via language-specific subspaces. Taking an mPLM as the backbone model, WAD-X constructs a subspace for each source-target language pair via adapters. The adapters use statistical alignment as prior knowledge to guide word-level alignment in the corresponding bilingual semantic subspace. We evaluate our model across a set of target languages on three zero-shot cross-lingual transfer tasks: part-of-speech tagging (POS), dependency parsing (DP), and sentiment analysis (SA). Experimental results demonstrate that our proposed model improves zero-shot cross-lingual transfer on the three benchmarks, with improvements of 2.19, 2.50, and 1.61 points on the POS, DP, and SA tasks over strong baselines.
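As a rough illustration of the adapter mechanism, the sketch below shows a standard bottleneck adapter that could be inserted into a frozen mPLM layer to learn a language-pair-specific subspace. The hidden sizes, the residual placement, and the word-alignment training signal are all assumptions; the exact WAD-X layout is not reproduced here.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Small residual module trained while the backbone stays frozen."""
        def __init__(self, dim=768, bottleneck=64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)
            self.act = nn.GELU()

        def forward(self, hidden):
            # The residual connection keeps the pre-trained representation
            # intact while the adapter learns a pair-specific correction.
            return hidden + self.up(self.act(self.down(hidden)))

    adapter = BottleneckAdapter()
    states = torch.randn(2, 16, 768)   # (batch, tokens, hidden) from an mPLM
    print(adapter(states).shape)       # torch.Size([2, 16, 768])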
42

Wang, Deqing, Junjie Wu, Jingyuan Yang, Baoyu Jing, Wenjie Zhang, Xiaonan He, and Hui Zhang. "Cross-Lingual Knowledge Transferring by Structural Correspondence and Space Transfer". IEEE Transactions on Cybernetics, 2021, 1–12. http://dx.doi.org/10.1109/tcyb.2021.3051005.

43

Yu, Chuanming, Haodong Xue, Manyi Wang, and Lu An. "Towards an entity relation extraction framework in the cross-lingual context". Electronic Library ahead-of-print, ahead-of-print (August 3, 2021). http://dx.doi.org/10.1108/el-10-2020-0304.

Abstract
Purpose Owing to the uneven distribution of annotated corpora among different languages, it is necessary to bridge the gap between low-resource and high-resource languages. From the perspective of entity relation extraction, this paper aims to extend the knowledge acquisition task from a single-language context to a cross-lingual context, and to improve relation extraction performance for low-resource languages. Design/methodology/approach This paper proposes a cross-lingual adversarial relation extraction (CLARE) framework, which decomposes cross-lingual relation extraction into parallel corpus acquisition and adversarial adaptation relation extraction. Based on the proposed framework, extensive experiments are conducted on two tasks, i.e. English-to-Chinese and English-to-Arabic cross-lingual entity relation extraction. Findings The Macro-F1 values of the optimal models on the two tasks are 0.8801 and 0.7899, respectively, indicating that the proposed CLARE framework can significantly improve entity relation extraction for low-resource languages. The experimental results suggest that the framework can effectively transfer both the corpus and the annotated tags from English to Chinese and Arabic. The study reveals that the proposed approach is less labour-intensive and more effective for cross-lingual entity relation extraction than the manual method, and that it generalizes well across languages. Originality/value The research results are of great significance for improving the performance of cross-lingual knowledge acquisition. Cross-lingual transfer may greatly reduce the time and cost of manually constructing multi-lingual corpora. It sheds light on knowledge acquisition and organization from unstructured text in the era of big data.
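The abstract does not spell out the adversarial component, but adversarial adaptation of this kind is commonly implemented with a gradient-reversal layer: a language discriminator is trained on top of shared features, while reversed gradients push the encoder towards language-invariant representations. The following is a minimal sketch of that standard trick, not the CLARE code.

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips the gradient sign backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    features = torch.randn(4, 32, requires_grad=True)  # shared encoder output
    reversed_feats = GradReverse.apply(features, 1.0)
    # A language discriminator trained on reversed_feats drives the encoder
    # towards representations that do not betray the input language.
    loss = reversed_feats.sum()
    loss.backward()
    print(features.grad[0, :3])   # gradients arrive with flipped sign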
44

Novák, Attila, and Borbála Novák. "Cross-lingual transfer of knowledge in distributional language models: Experiments in Hungarian". Acta Linguistica Academica, November 22, 2022. http://dx.doi.org/10.1556/2062.2022.00580.

Abstract
In this paper, we argue that the very convincing performance of recent deep-neural-model-based NLP applications has demonstrated that the distributionalist approach to language description has proven more successful than the earlier subtle rule-based models created by the generative school. The now ubiquitous neural models can naturally handle ambiguity and achieve human-like linguistic performance, with most of their training consisting only of noisy raw linguistic data without any multimodal grounding or external supervision, refuting Chomsky's argument that no generic neural architecture can arrive at the linguistic performance exhibited by humans given the limited input available to children. In addition, we demonstrate in experiments with Hungarian as the target language that the shared internal representations in multilingually trained versions of these models enable them to transfer specific linguistic skills, including structured annotation skills, from one language to another remarkably efficiently.
45

Tikhonova, Maria, Vladislav Mikhailov, Dina Pisarevskaya, Valentin Malykh, and Tatiana Shavrina. "Ad astra or astray: Exploring linguistic knowledge of multilingual BERT through NLI task". Natural Language Engineering, June 9, 2022, 1–30. http://dx.doi.org/10.1017/s1351324922000225.

Abstract
Recent research has reported that standard fine-tuning approaches can be unstable because they are sensitive to various sources of randomness, including but not limited to weight initialization, training data order, and hardware. Such brittleness can lead to different evaluation results, prediction confidences, and generalization inconsistency for the same models independently fine-tuned under the same experimental setup. Our paper explores this problem in natural language inference, a common task in benchmarking practices, and extends the ongoing research to the multilingual setting. We propose six novel textual entailment and broad-coverage diagnostic datasets for French, German, and Swedish. Our key findings are that the mBERT model demonstrates fine-tuning instability for categories that involve lexical semantics, logic, and predicate-argument structure, and struggles to learn monotonicity, negation, numeracy, and symmetry. We also observe that using extra training data only in English can enhance generalization performance and fine-tuning stability, which we attribute to cross-lingual transfer capabilities. However, the ratio of particular features in the additional training data may instead hurt performance for some model instances. We are publicly releasing the datasets, hoping to foster the diagnostic investigation of language models (LMs) in a cross-lingual scenario, particularly in terms of benchmarking, which might promote a more holistic understanding of multilingualism in LMs and of cross-lingual knowledge transfer.
46

Batsuren, Khuyagbaatar, Gábor Bella, and Fausto Giunchiglia. "A large and evolving cognate database". Language Resources and Evaluation, May 30, 2021. http://dx.doi.org/10.1007/s10579-021-09544-6.

Abstract
We present CogNet, a large-scale, automatically built database of sense-tagged cognates, i.e., words of common origin and meaning across languages. CogNet is continuously evolving: its current version contains over 8 million cognate pairs across 338 languages and 35 writing systems, with new releases already in preparation. The paper presents the algorithm and input resources used for its computation, an evaluation of the result, and a quantitative analysis of cognate data leading to novel insights on language diversity. Furthermore, as an example of the use of large-scale cross-lingual knowledge bases for improving the quality of multilingual applications, we present a case study on the use of CogNet for bilingual lexicon induction in the framework of cross-lingual transfer learning.
47

Rivera-Zavala, Renzo M., and Paloma Martínez. "Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization". BMC Bioinformatics 22, S1 (December 2021). http://dx.doi.org/10.1186/s12859-021-04247-9.

Abstract
Background The volume of biomedical literature and clinical data is growing at an exponential rate. Therefore, efficient access to data described in unstructured biomedical texts is a crucial task for the biomedical industry and research. Named Entity Recognition (NER) is the first step for information and knowledge acquisition when dealing with unstructured texts. Recent NER approaches use contextualized word representations as input for a downstream classification task. However, distributed word vectors (embeddings) are very limited for Spanish, and even more so for the biomedical domain. Methods In this work, we develop several biomedical Spanish word representations and introduce two deep learning approaches for recognizing pharmaceutical, chemical, and other biomedical entities in Spanish clinical case texts and biomedical texts: one based on a Bi-LSTM-CRF model and the other on a BERT-based architecture. Results Several Spanish biomedical embeddings, together with the two deep learning models, were evaluated on the PharmaCoNER and CORD-19 datasets. The PharmaCoNER dataset is composed of a set of Spanish clinical cases annotated with drugs, chemical compounds and pharmacological substances; our extended Bi-LSTM-CRF model obtains an F-score of 85.24% on entity identification and classification, and the BERT model obtains an F-score of 88.80%. For the entity normalization task, the extended Bi-LSTM-CRF model achieves an F-score of 72.85% and the BERT model achieves 79.97%. The CORD-19 dataset consists of scholarly articles written in English annotated with biomedical concepts such as disorder, species, chemical or drug, gene and protein, enzyme and anatomy. The Bi-LSTM-CRF and BERT models obtain F-measures of 78.23% and 78.86%, respectively, on entity identification and classification on the CORD-19 dataset. Conclusion These results show that deep learning models with in-domain knowledge learned from large-scale datasets substantially improve named entity recognition performance. Moreover, contextualized representations help to handle the complexity and ambiguity inherent to biomedical texts. Embeddings based on words, concepts, senses, etc. for languages other than English are required to improve NER tasks in those languages.
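As a usage-level illustration (not the authors' system), a fine-tuned transformer NER model of this kind can be applied with the Hugging Face pipeline API roughly as follows; the checkpoint name is a placeholder to be replaced with an actual Spanish biomedical NER model.

    from transformers import pipeline

    # The model id below is hypothetical; substitute a real Spanish
    # biomedical NER checkpoint before running.
    ner = pipeline(
        "token-classification",
        model="some-org/spanish-biomedical-ner",  # placeholder id
        aggregation_strategy="simple",            # merge word-piece tokens
    )

    for entity in ner("El paciente recibió 500 mg de amoxicilina oral."):
        print(entity["entity_group"], entity["word"], round(entity["score"], 3))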
48

Perez, Naiara, Pablo Accuosto, Àlex Bravo, Montse Cuadros, Eva Martínez-García, Horacio Saggion, and German Rigau. "Cross-lingual Semantic Annotation of Biomedical Literature: Experiments in Spanish and English". Bioinformatics, November 15, 2019. http://dx.doi.org/10.1093/bioinformatics/btz853.

Abstract
Motivation Biomedical literature is one of the most relevant sources of information for knowledge mining in the field of Bioinformatics. In spite of English being the most widely addressed language in the field, in recent years there has been growing interest from the natural language processing community in dealing with languages other than English. However, the availability of language resources and tools for the appropriate treatment of non-English texts lags behind. Our research is concerned with the semantic annotation of biomedical texts in Spanish, which can be considered an under-resourced language as far as biomedical text processing is concerned. Results We have carried out experiments to assess the effectiveness of several methods for the automatic annotation of biomedical texts in Spanish. One approach is based on the linguistic analysis of Spanish texts and their annotation using an information retrieval and concept disambiguation approach. A second method takes advantage of a Spanish-English machine translation process to annotate English documents and transfer the annotations back to Spanish. A third method combines both procedures. Our evaluation shows that the combined system has competitive advantages over the two individual procedures. Availability UMLSmapper (https://snlt.vicomtech.org/umlsmapper) and the annotation transfer tool (http://scientmin.taln.upf.edu/anntransfer) are freely available for research purposes as web services and/or demos. Supplementary information Supplementary data are available at Bioinformatics online.
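The second method above is essentially a translate-annotate-project pipeline. The sketch below shows its shape with placeholder stubs; the real components (the MT system, the English annotator, and the alignment-based projection) are assumptions standing in for the tools the authors actually used, and the returned values are canned examples.

    from typing import List, Tuple

    def translate_es_en(text_es: str) -> str:
        """Placeholder for a Spanish-to-English MT system."""
        return "The patient presents acute pain."        # canned output

    def annotate_en(text_en: str) -> List[Tuple[str, str]]:
        """Placeholder for an English biomedical concept annotator."""
        return [("acute pain", "UMLS-CUI-PLACEHOLDER")]  # canned output

    def project_back(annotations, text_es):
        """Placeholder: map English spans back to Spanish via word alignment."""
        return [("dolor agudo", concept) for _, concept in annotations]

    text_es = "El paciente presenta dolor agudo."
    english = translate_es_en(text_es)
    spanish_annotations = project_back(annotate_en(english), text_es)
    print(spanish_annotations)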
49

Hettiarachchi, Hansi, Mariam Adedoyin-Olowe, Jagdev Bhogal, and Mohamed Medhat Gaber. "TTL: transformer-based two-phase transfer learning for cross-lingual news event detection". International Journal of Machine Learning and Cybernetics, March 8, 2023. http://dx.doi.org/10.1007/s13042-023-01795-9.

Abstract
Today we have access to a vast amount of data, especially on the internet. Online news agencies play a vital role in generating this data, but most of it is unstructured, so extracting important information requires enormous effort. Thus, automated intelligent event detection mechanisms are invaluable to the community. In this research, we focus on identifying event details at the sentence and token levels from news articles, considering their fine granularity. Previous research has proposed various approaches, ranging from traditional machine learning to deep learning, targeting event detection at these levels. Among these approaches, transformer-based ones performed best, exploiting the transferability and context awareness of transformers, and achieved state-of-the-art results. However, they treated sentence- and token-level tasks as separate tasks, even though their interconnections can be exploited for mutual improvement. To fill this gap, we propose a novel transformer-based learning strategy named Two-phase Transfer Learning (TTL), which allows a model to utilise knowledge from a task at one data granularity for another task at a different data granularity, and we evaluate its performance on sentence- and token-level event detection. We also empirically evaluate how event detection performance can be improved for different (high- and low-resource) languages, involving monolingual and multilingual pre-trained transformers and language-based learning strategies along with the proposed learning strategy. Our findings mainly indicate the effectiveness of multilingual models in low-resource event detection. TTL can further improve model performance, depending on the order in which the involved tasks are learned and their relatedness with respect to final predictions.
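A minimal sketch of the two-phase idea, under the assumption that phase one fine-tunes a sentence-level classifier and phase two warm-starts a token-level tagger from the same weights; the checkpoint names and label counts are illustrative, not the authors' configuration.

    from transformers import (AutoModelForSequenceClassification,
                              AutoModelForTokenClassification)

    # Phase 1: sentence-level event detection (event vs. no event).
    sentence_model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=2)
    # ... fine-tune sentence_model on sentence-level labels here ...
    sentence_model.save_pretrained("ttl-phase1")

    # Phase 2: token-level trigger detection, warm-started from phase 1.
    # The encoder weights carry over; a fresh token-classification head
    # is initialised on top of them.
    token_model = AutoModelForTokenClassification.from_pretrained(
        "ttl-phase1", num_labels=5)  # e.g. BIO-style trigger tags
    # ... fine-tune token_model on token-level labels here ...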
50

Do, Phuc, Trung Phan, Hung Le, and Brij B. Gupta. "Building a knowledge graph by using cross-lingual transfer method and distributed MinIE algorithm on apache spark". Neural Computing and Applications, November 24, 2020. http://dx.doi.org/10.1007/s00521-020-05495-1.
