To view the other types of publications on this topic, follow this link: Cross-Lingual knowledge transfer.

Journal articles on the topic “Cross-Lingual knowledge transfer”


Get to know the top 50 journal articles for research on the topic “Cross-Lingual knowledge transfer”.

Next to every work in the list there is an “Add to bibliography” option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant details are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Wang, Yabing, Fan Wang, Jianfeng Dong and Hao Luo. “CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer”. Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.

Annotation:
Cross-lingual cross-modal retrieval has garnered increasing attention recently, which aims to achieve the alignment between vision and target language (V-T) without using any annotated V-T data pairs. Current methods employ machine translation (MT) to construct pseudo-parallel data pairs, which are then used to learn a multi-lingual and multi-modal embedding space that aligns visual and target-language representations. However, the large heterogeneous gap between vision and text, along with the noise present in target language translations, poses significant challenges in effectively aligning their representations. To address these challenges, we propose a general framework, Cross-Lingual to Cross-Modal (CL2CM), which improves the alignment between vision and target language using cross-lingual transfer. This approach allows us to fully leverage the merits of multi-lingual pre-trained models (e.g., mBERT) and the benefits of the same modality structure, i.e., smaller gap, to provide reliable and comprehensive semantic correspondence (knowledge) for the cross-modal network. We evaluate our proposed approach on two multilingual image-text datasets, Multi30K and MSCOCO, and one video-text dataset, VATEX. The results clearly demonstrate the effectiveness of our proposed method and its high potential for large-scale retrieval.
2

Singhal, Abhishek, Happa Khan and Aditya Sharma. “Empowering Multilingual AI: Cross-Lingual Transfer Learning”. Tuijin Jishu/Journal of Propulsion Technology 43, No. 4 (November 26, 2023): 284–87. http://dx.doi.org/10.52783/tjjpt.v43.i4.2353.

Annotation:
Multilingual Natural Language Processing (NLP) and Cross-Lingual Transfer Learning have emerged as pivotal fields in the realm of language technology. This abstract explores the essential concepts and methodologies behind these areas, shedding light on their significance in a world characterized by linguistic diversity. Multilingual NLP enables machines to process text in many languages, supporting global collaboration. Cross-lingual transfer learning, on the other hand, leverages knowledge from one language to enhance NLP tasks in another, facilitating efficient resource utilization and improved model performance. The abstract highlights the growing relevance of these approaches in a multilingual and interconnected world, underscoring their potential to reshape the future of natural language understanding and communication.
3

Zhang, Mozhi, Yoshinari Fujinuma and Jordan Boyd-Graber. “Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification”. Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 05 (April 3, 2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.

Annotation:
Text classification must sometimes be applied in a low-resource language with no labeled training data. However, training data may be available in a related language. We investigate whether character-level knowledge transfer from a related language helps text classification. We present a cross-lingual document classification framework (caco) that exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. Experiments confirm that character-level knowledge transfer is more data-efficient than word-level transfer between related languages.
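Since this abstract describes a concrete architecture (a character-based embedder feeding a word-based classifier, with one character vocabulary shared by the related source and target languages), a minimal PyTorch sketch may help picture it. All module names and sizes below are illustrative assumptions, not the authors' CACO code:
```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Builds a word vector from the word's characters; the character
    vocabulary is shared by the related source and target languages."""
    def __init__(self, n_chars=200, char_dim=32, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.rnn = nn.LSTM(char_dim, word_dim // 2, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                     # (n_words, max_word_len)
        states, _ = self.rnn(self.char_emb(char_ids))
        return states.mean(dim=1)                    # (n_words, word_dim)

class CharDocClassifier(nn.Module):
    """Averages the character-derived word vectors and predicts a document label."""
    def __init__(self, embedder, word_dim=100, n_labels=4):
        super().__init__()
        self.embedder, self.out = embedder, nn.Linear(word_dim, n_labels)

    def forward(self, doc_char_ids):                 # one document: (n_words, max_word_len)
        word_vecs = self.embedder(doc_char_ids)
        return self.out(word_vecs.mean(dim=0))       # (n_labels,) logits

model = CharDocClassifier(CharWordEmbedder())
logits = model(torch.randint(1, 200, (12, 8)))       # 12 words of up to 8 characters
```
Because spelling is shared, a target-language cognate lands near its source-language counterpart, which is the transfer effect the paper exploits.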
4

Colhon, Mihaela. “Language engineering for syntactic knowledge transfer”. Computer Science and Information Systems 9, No. 3 (2012): 1231–47. http://dx.doi.org/10.2298/csis120130032c.

Annotation:
In this paper we present a method for an English-Romanian treebank construction, together with the obtained evaluation results. The treebank is built upon a parallel English-Romanian corpus word-aligned and annotated at the morphological and syntactic level. The syntactic trees of the Romanian texts are generated by considering the syntactic phrases of the English parallel texts automatically resulted from syntactic parsing. The method reuses and adjusts existing tools and algorithms for cross-lingual transfer of syntactic constituents and syntactic trees alignment.
5

Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang and Haobo Cheng. “Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition”. Electronics 10, No. 24 (December 20, 2021): 3172. http://dx.doi.org/10.3390/electronics10243172.

Annotation:
Phonological-based features (articulatory features, AFs) describe the movements of the vocal organ which are shared across languages. This paper investigates a domain-adversarial neural network (DANN) to extract reliable AFs, and different multi-stream techniques are used for cross-lingual speech recognition. First, a novel universal phonological attributes definition is proposed for Mandarin, English, German and French. Then a DANN-based AFs detector is trained using source languages (English, German and French). When doing the cross-lingual speech recognition, the AFs detectors are used to transfer the phonological knowledge from source languages (English, German and French) to the target language (Mandarin). Two multi-stream approaches are introduced to fuse the acoustic features and cross-lingual AFs. In addition, the monolingual AFs system (i.e., the AFs are directly extracted from the target language) is also investigated. Experiments show that the performance of the AFs detector can be improved by using convolutional neural networks (CNN) with a domain-adversarial learning method. The multi-head attention (MHA) based multi-stream can reach the best performance compared to the baseline, cross-lingual adaptation approach, and other approaches. More specifically, the MHA-mode with cross-lingual AFs yields significant improvements over monolingual AFs when the training data size is restricted, and it can be easily extended to other low-resource languages.
6

Xu, Zenan, Linjun Shou, Jian Pei, Ming Gong, Qinliang Su, Xiaojun Quan and Daxin Jiang. “A Graph Fusion Approach for Cross-Lingual Machine Reading Comprehension”. Proceedings of the AAAI Conference on Artificial Intelligence 37, No. 11 (June 26, 2023): 13861–68. http://dx.doi.org/10.1609/aaai.v37i11.26623.

Annotation:
Although great progress has been made for Machine Reading Comprehension (MRC) in English, scaling out to a large number of languages remains a huge challenge due to the lack of large amounts of annotated training data in non-English languages. To address this challenge, some recent efforts of cross-lingual MRC employ machine translation to transfer knowledge from English to other languages, through either explicit alignment or implicit attention. For effective knowledge transition, it is beneficial to leverage both semantic and syntactic information. However, the existing methods fail to explicitly incorporate syntax information in model learning. Consequently, the models are not robust to errors in alignment and noises in attention. In this work, we propose a novel approach, which jointly models the cross-lingual alignment information and the mono-lingual syntax information using a graph. We develop a series of algorithms, including graph construction, learning, and pre-training. The experiments on two benchmark datasets for cross-lingual MRC show that our approach outperforms all strong baselines, which verifies the effectiveness of syntax information for cross-lingual MRC.
7

Rijhwani, Shruti, Jiateng Xie, Graham Neubig and Jaime Carbonell. “Zero-Shot Neural Transfer for Cross-Lingual Entity Linking”. Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6924–31. http://dx.doi.org/10.1609/aaai.v33i01.33016924.

Annotation:
Cross-lingual entity linking maps an entity mention in a source language to its corresponding entry in a structured knowledge base that is in a different (target) language. While previous work relies heavily on bilingual lexical resources to bridge the gap between the source and the target languages, these resources are scarce or unavailable for many low-resource languages. To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language. Specifically, we propose pivot-based entity linking, which leverages information from a high-resource “pivot” language to train character-level neural entity linking models that are transferred to the source low-resource language in a zero-shot manner. With experiments on 9 low-resource languages and transfer through a total of 54 languages, we show that our proposed pivot-based framework improves entity linking accuracy 17% (absolute) on average over the baseline systems, for the zero-shot scenario. Further, we also investigate the use of language-universal phonological representations which improves average accuracy (absolute) by 36% when transferring between languages that use different scripts.
8

Bari, M. Saiful, Shafiq Joty and Prathyusha Jwalapuram. “Zero-Resource Cross-Lingual Named Entity Recognition”. Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 05 (April 3, 2020): 7415–23. http://dx.doi.org/10.1609/aaai.v34i05.6237.

Annotation:
Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features. However, these models still require manually annotated training data, which is not available for many languages. In this paper, we propose an unsupervised cross-lingual NER model that can transfer NER knowledge from one language to another in a completely unsupervised way without relying on any bilingual dictionary or parallel data. Our model achieves this through word-level adversarial learning and augmented fine-tuning with parameter sharing and feature augmentation. Experiments on five different languages demonstrate the effectiveness of our approach, outperforming existing models by a good margin and setting a new SOTA for each language pair.
9

Qi, Kunxun, and Jianfeng Du. “Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference”. Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.

Annotation:
Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that are fine-tuned on a single language and hard to transfer knowledge to another language. To resolve these limitations in existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and classical neural models for cross-lingual natural language inference. It trains on the union of data in the source language and data in the target language, learning language-invariant features to improve the inference performance. Experimental results on the XNLI benchmark demonstrate that three popular neural models enhanced by the proposed framework significantly outperform the original models.
10

Zhang, Weizhao, and Hongwu Yang. “Meta-Learning for Mandarin-Tibetan Cross-Lingual Speech Synthesis”. Applied Sciences 12, No. 23 (November 28, 2022): 12185. http://dx.doi.org/10.3390/app122312185.

Annotation:
The paper proposes a meta-learning-based Mandarin-Tibetan cross-lingual text-to-speech (TTS) to realize both Mandarin and Tibetan speech synthesis under a unique framework. First, we build two kinds of Tacotron2-based Mandarin-Tibetan cross-lingual baseline TTS. One is a shared encoder Mandarin-Tibetan cross-lingual TTS, and another is a separate encoder Mandarin-Tibetan cross-lingual TTS. Both baseline TTS use the speaker classifier with a gradient reversal layer to disentangle speaker-specific information from the text encoder. At the same time, we design a prosody generator to extract prosodic information from sentences to explore syntactic and semantic information adequately. To further improve the synthesized speech quality of the Tacotron2-based Mandarin-Tibetan cross-lingual TTS, we propose a meta-learning-based Mandarin-Tibetan cross-lingual TTS. Based on the separate encoder Mandarin-Tibetan cross-lingual TTS, we use an additional dynamic network to predict the parameters of the language-dependent text encoder that could realize better cross-lingual knowledge sharing in the sequence-to-sequence TTS. Lastly, we synthesize Mandarin or Tibetan speech through the unique acoustic model. The baseline experimental results show that the separate encoder Mandarin-Tibetan cross-lingual TTS could handle the input of different languages better than the shared encoder Mandarin-Tibetan cross-lingual TTS. The experimental results further show that the proposed meta-learning-based Mandarin-Tibetan cross-lingual speech synthesis method could effectively improve the voice quality of synthesized speech in terms of naturalness and speaker similarity.
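The “additional dynamic network” mentioned above, which predicts the parameters of the language-dependent text encoder, is essentially a small hypernetwork conditioned on a language embedding. The following is only a toy PyTorch sketch of that general pattern, with made-up sizes and a single generated linear layer rather than the paper's full encoder:
```python
import torch
import torch.nn as nn

class LanguageHyperEncoder(nn.Module):
    """A language embedding is fed to a generator that emits the weights of a
    language-dependent encoding layer, so Mandarin and Tibetan get separate
    parameters while sharing the generator itself."""
    def __init__(self, n_langs=2, lang_dim=8, in_dim=64, out_dim=64):
        super().__init__()
        self.lang_emb = nn.Embedding(n_langs, lang_dim)
        self.weight_gen = nn.Linear(lang_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(lang_dim, out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, lang_id):                   # x: (batch, seq_len, in_dim)
        z = self.lang_emb(lang_id)                   # (lang_dim,)
        W = self.weight_gen(z).view(self.in_dim, self.out_dim)
        b = self.bias_gen(z)
        return torch.tanh(x @ W + b)                 # language-specific encoding

encoder = LanguageHyperEncoder()
mandarin_out = encoder(torch.randn(2, 7, 64), torch.tensor(0))
tibetan_out = encoder(torch.randn(2, 7, 64), torch.tensor(1))
```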
11

Chen, Xilun, Yu Sun, Ben Athiwaratkun, Claire Cardie and Kilian Weinberger. “Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification”. Transactions of the Association for Computational Linguistics 6 (December 2018): 557–70. http://dx.doi.org/10.1162/tacl_a_00039.

Annotation:
In recent years great success has been achieved in sentiment classification for English, thanks in part to the availability of copious annotated resources. Unfortunately, most languages do not enjoy such an abundance of labeled data. To tackle the sentiment classification problem in low-resource languages without adequate annotated data, we propose an Adversarial Deep Averaging Network (ADAN) to transfer the knowledge learned from labeled data on a resource-rich source language to low-resource languages where only unlabeled data exist. ADAN has two discriminative branches: a sentiment classifier and an adversarial language discriminator. Both branches take input from a shared feature extractor to learn hidden representations that are simultaneously indicative for the classification task and invariant across languages. Experiments on Chinese and Arabic sentiment classification demonstrate that ADAN significantly outperforms state-of-the-art systems.
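The two-branch design summarised above (a shared feature extractor feeding both a sentiment classifier and an adversarial language discriminator) is commonly realised with a gradient reversal layer. A minimal PyTorch sketch of that pattern follows; the layer sizes and names are illustrative assumptions, not the published ADAN implementation:
```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class AdversarialAveragingNet(nn.Module):
    def __init__(self, emb_dim=300, hidden=128, n_classes=3, n_langs=2):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.sentiment = nn.Linear(hidden, n_classes)   # trained on labeled source data
        self.lang_disc = nn.Linear(hidden, n_langs)     # trained to tell the languages apart

    def forward(self, avg_emb, lamb=1.0):               # avg_emb: mean-pooled word embeddings
        feats = self.extractor(avg_emb)
        cls_logits = self.sentiment(feats)
        # reversed gradients push the extractor toward language-invariant features
        disc_logits = self.lang_disc(GradReverse.apply(feats, lamb))
        return cls_logits, disc_logits
```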
12

Wang, Mengqiu, and Christopher D. Manning. “Cross-lingual Projected Expectation Regularization for Weakly Supervised Learning”. Transactions of the Association for Computational Linguistics 2 (December 2014): 55–66. http://dx.doi.org/10.1162/tacl_a_00165.

Annotation:
We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.
13

Fang, Yuwei, Shuohang Wang, Zhe Gan, Siqi Sun and Jingjing Liu. “FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding”. Proceedings of the AAAI Conference on Artificial Intelligence 35, No. 14 (May 18, 2021): 12776–84. http://dx.doi.org/10.1609/aaai.v35i14.17512.

Annotation:
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
14

Ge, Ling, Chunming Hu, Guanghui Ma, Jihong Liu and Hong Zhang. “Discrepancy and Uncertainty Aware Denoising Knowledge Distillation for Zero-Shot Cross-Lingual Named Entity Recognition”. Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 16 (March 24, 2024): 18056–64. http://dx.doi.org/10.1609/aaai.v38i16.29762.

Annotation:
The knowledge distillation-based approaches have recently yielded state-of-the-art (SOTA) results for cross-lingual NER tasks in zero-shot scenarios. These approaches typically employ a teacher network trained with the labelled source (rich-resource) language to infer pseudo-soft labels for the unlabelled target (zero-shot) language, and force a student network to approximate these pseudo labels to achieve knowledge transfer. However, previous works have rarely discussed the issue of pseudo-label noise caused by the source-target language gap, which can mislead the training of the student network and result in negative knowledge transfer. This paper proposes a discrepancy and uncertainty aware Denoising Knowledge Distillation model (DenKD) to tackle this issue. Specifically, DenKD uses a discrepancy-aware denoising representation learning method to optimize the class representations of the target language produced by the teacher network, thus enhancing the quality of pseudo labels and reducing noisy predictions. Further, DenKD employs an uncertainty-aware denoising method to quantify the pseudo-label noise and adjust the focus of the student network on different samples during knowledge distillation, thereby mitigating the noise's adverse effects. We conduct extensive experiments on 28 languages including 4 languages not covered by the pre-trained models, and the results demonstrate the effectiveness of our DenKD.
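The uncertainty-aware part of this pipeline can be pictured as a distillation loss in which target-language tokens with unreliable teacher predictions are down-weighted. The sketch below uses a simple entropy-based weight as a stand-in for the paper's actual uncertainty estimate, so it illustrates the general idea rather than DenKD itself:
```python
import torch
import torch.nn.functional as F

def weighted_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level KL distillation where uncertain (high-entropy) teacher
    predictions contribute less to the student's training signal."""
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)       # (n_tokens, n_classes)
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    t_logp = t_prob.clamp_min(1e-9).log()
    kl = (t_prob * (t_logp - s_logp)).sum(dim=-1)                  # per-token KL(teacher || student)
    entropy = -(t_prob * t_logp).sum(dim=-1)
    weight = torch.exp(-entropy)                                   # confident teacher -> weight near 1
    return (weight * kl).sum() / weight.sum()

loss = weighted_distillation_loss(torch.randn(20, 9), torch.randn(20, 9))
```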
15

Wu, Tianxing, Chaoyu Gao, Lin Li and Yuxiang Wang. “Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs”. Applied Sciences 12, No. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.

Annotation:
In recent years, the scale of knowledge graphs and the number of entities have grown rapidly. Entity matching across different knowledge graphs has become an urgent problem to be solved for knowledge fusion. With the importance of entity matching being increasingly evident, the use of representation learning technologies to find matched entities has attracted extensive attention due to the computability of vector representations. However, existing studies on representation learning technologies cannot make full use of knowledge graph relevant multi-modal information. In this paper, we propose a new cross-lingual entity matching method (called CLEM) with knowledge graph representation learning on rich multi-modal information. The core is the multi-view intact space learning method to integrate embeddings of multi-modal information for matching entities. Experimental results on cross-lingual datasets show the superiority and competitiveness of our proposed method.
16

Zhang, Weitai, Lirong Dai, Junhua Liu and Shijin Wang. “Improving Many-to-Many Neural Machine Translation via Selective and Aligned Online Data Augmentation”. Applied Sciences 13, No. 6 (March 20, 2023): 3946. http://dx.doi.org/10.3390/app13063946.

Annotation:
Multilingual neural machine translation (MNMT) models are theoretically attractive for low- and zero-resource language pairs with the impact of cross-lingual knowledge transfer. Existing approaches mainly focus on English-centric directions and always underperform compared to their pivot-based counterparts for non-English directions. In this work, we aim to build a many-to-many MNMT system with an emphasis on the quality of non-English directions by exploring selective and aligned online data augmentation algorithms. Based on our findings showing that the augmented synthetic samples are not “the more, the better”, we propose selective online back-translation (SOBT) and thoroughly study different selection criteria to pick suitable samples for training. Furthermore, we boost SOBT with cross-lingual online substitution (CLOS) to align token representations and encourage transfer learning. Our intuition is based on the hypothesis that a universal cross-lingual representation leads to a better multilingual translation performance, especially for non-English directions. Compared to previous state-of-the-art many-to-many MNMT models and conventional pivot-based methods, experiments on IWSLT2014 and OPUS-100 translation benchmarks show that our approach achieves a competitive or even better performance on English-centric directions and achieves up to ∼12 BLEU for non-English directions. All of our models and codes are publicly available.
17

Cui, Ruixiang, Rahul Aralikatte, Heather Lent and Daniel Hershcovich. “Compositional Generalization in Multilingual Semantic Parsing over Wikidata”. Transactions of the Association for Computational Linguistics 10 (2022): 937–55. http://dx.doi.org/10.1162/tacl_a_00499.

Annotation:
Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ (Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese, and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-of-the-art pretrained multilingual encoders. Furthermore, our methodology, dataset, and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources.
18

He, Keqing, Weiran Xu and Yuanmeng Yan. “Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding”. IEEE Access 8 (2020): 29407–16. http://dx.doi.org/10.1109/access.2020.2972925.

19

Zhou, Shuyan, Shruti Rijhwani, John Wieting, Jaime Carbonell and Graham Neubig. “Improving Candidate Generation for Low-resource Cross-lingual Entity Linking”. Transactions of the Association for Computational Linguistics 8 (July 2020): 109–24. http://dx.doi.org/10.1162/tacl_a_00303.

Annotation:
Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts. The first step of (X)EL is candidate generation, which retrieves a list of plausible candidate entities from the target-language KB for each mention. Approaches based on resources from Wikipedia have proven successful in the realm of relatively high-resource languages, but these do not extend well to low-resource languages with few, if any, Wikipedia pages. Recently, transfer learning methods have been shown to reduce the demand for resources in the low-resource languages by utilizing resources in closely related languages, but the performance still lags far behind their high-resource counterparts. In this paper, we first assess the problems faced by current entity candidate generation methods for low-resource XEL, then propose three improvements that (1) reduce the disconnect between entity mentions and KB entries, and (2) improve the robustness of the model to low-resource scenarios. The methods are simple, but effective: We experiment with our approach on seven XEL datasets and find that they yield an average gain of 16.9% in Top-30 gold candidate recall, compared with state-of-the-art baselines. Our improved model also yields an average gain of 7.9% in in-KB accuracy of end-to-end XEL.
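A baseline candidate generator of the kind this paper improves on can be approximated with character n-gram similarity between a mention string and the names of KB entries. The scikit-learn sketch below uses made-up toy entries purely for illustration; the paper's actual improvements are more involved:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_entries = ["Barack Obama", "Barcelona", "Baracoa", "Ohio"]        # toy target-language KB
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
kb_matrix = vectorizer.fit_transform(kb_entries)

def top_k_candidates(mention, k=3):
    """Return the k KB entries whose character n-grams best match the mention."""
    sims = cosine_similarity(vectorizer.transform([mention]), kb_matrix)[0]
    best = sims.argsort()[::-1][:k]
    return [(kb_entries[i], round(float(sims[i]), 3)) for i in best]

print(top_k_candidates("Barak Obama"))    # a misspelled mention still retrieves the right entry
```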
20

Ge, Ling, Chunming Hu, Guanghui Ma, Hong Zhang and Jihong Liu. “ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition”. Proceedings of the AAAI Conference on Artificial Intelligence 37, No. 11 (June 26, 2023): 12818–26. http://dx.doi.org/10.1609/aaai.v37i11.26507.

Annotation:
For named entity recognition (NER) in zero-resource languages, utilizing knowledge distillation methods to transfer language-independent knowledge from the rich-resource source languages to zero-resource languages is an effective means. Typically, these approaches adopt a teacher-student architecture, where the teacher network is trained in the source language, and the student network seeks to learn knowledge from the teacher network and is expected to perform well in the target language. Despite the impressive performance achieved by these methods, we argue that they have two limitations. Firstly, the teacher network fails to effectively learn language-independent knowledge shared across languages due to the differences in the feature distribution between the source and target languages. Secondly, the student network acquires all of its knowledge from the teacher network and ignores the learning of target language-specific knowledge. Undesirably, these limitations would hinder the model's performance in the target language. This paper proposes an unsupervised prototype knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method to achieve class feature alignment by adjusting the prototypes' distance from the source and target languages, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototype self-training method to learn the intrinsic structure of the language by retraining the student network on the target data using samples' distance information from prototypes, thereby enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
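The prototype machinery can be reduced to two small steps: averaging source-language token features per class to obtain prototypes, and pseudo-labelling target-language tokens by their nearest prototype for self-training. A hedged PyTorch sketch of those two steps (it assumes every class occurs in the batch and is not the authors' implementation):
```python
import torch

def class_prototypes(feats, labels, n_classes):
    """Mean feature vector per class -- the class 'prototype'."""
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])

def pseudo_label_by_prototype(target_feats, prototypes):
    """Assign each target-language token to its nearest prototype; the distance
    can be used to pick confident tokens for self-training."""
    dists = torch.cdist(target_feats, prototypes)       # (n_tokens, n_classes)
    min_dist, labels = dists.min(dim=1)
    return labels, min_dist

src_feats, src_labels = torch.randn(100, 32), torch.randint(0, 5, (100,))
protos = class_prototypes(src_feats, src_labels, n_classes=5)
tgt_labels, tgt_dist = pseudo_label_by_prototype(torch.randn(40, 32), protos)
```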
21

Du, Yangkai, Tengfei Ma, Lingfei Wu, Xuhong Zhang and Shouling Ji. “AdaCCD: Adaptive Semantic Contrasts Discovery Based Cross Lingual Adaptation for Code Clone Detection”. Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 16 (March 24, 2024): 17942–50. http://dx.doi.org/10.1609/aaai.v38i16.29749.

Annotation:
Code Clone Detection, which aims to retrieve functionally similar programs from large code bases, has been attracting increasing attention. Modern software often involves a diverse range of programming languages. However, current code clone detection methods are generally limited to only a few popular programming languages due to insufficient annotated data as well as their own model design constraints. To address these issues, we present AdaCCD, a novel cross-lingual adaptation method that can detect cloned codes in a new language without annotations in that language. AdaCCD leverages language-agnostic code representations from pre-trained programming language models and proposes an Adaptively Refined Contrastive Learning framework to transfer knowledge from resource-rich languages to resource-poor languages. We evaluate the cross-lingual adaptation results of AdaCCD by constructing a multilingual code clone detection benchmark consisting of 5 programming languages. AdaCCD achieves significant improvements over other baselines, and achieves comparable performance to supervised fine-tuning.
22

Ge, Ling, Chunming Hu, Guanghui Ma, Jihong Liu and Hong Zhang. “DA-Net: A Disentangled and Adaptive Network for Multi-Source Cross-Lingual Transfer Learning”. Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 16 (March 24, 2024): 18047–55. http://dx.doi.org/10.1609/aaai.v38i16.29761.

Annotation:
Multi-Source cross-lingual transfer learning deals with the transfer of task knowledge from multiple labelled source languages to an unlabeled target language under the language shift. Existing methods typically focus on weighting the predictions produced by language-specific classifiers of different sources that follow a shared encoder. However, all source languages share the same encoder, which is updated by all these languages. The extracted representations inevitably contain different source languages' information, which may disturb the learning of the language-specific classifiers. Additionally, due to the language gap, language-specific classifiers trained with source labels are unable to make accurate predictions for the target language. Both facts impair the model's performance. To address these challenges, we propose a Disentangled and Adaptive Network (DA-Net). Firstly, we devise a feedback-guided collaborative disentanglement method that seeks to purify input representations of classifiers, thereby mitigating mutual interference from multiple sources. Secondly, we propose a class-aware parallel adaptation method that aligns class-level distributions for each source-target language pair, thereby alleviating the language pairs' language gap. Experimental results on three different tasks involving 38 languages validate the effectiveness of our approach.
23

Wu, Qianhui, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang and Chin-Yew Lin. “Enhanced Meta-Learning for Cross-Lingual Named Entity Recognition with Minimal Resources”. Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 05 (April 3, 2020): 9274–81. http://dx.doi.org/10.1609/aaai.v34i05.6466.

Annotation:
For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). While all existing methods directly transfer from source-learned model to a target language, in this paper, we propose to fine-tune the learned model with a few similar examples given a test case, which could benefit the prediction by leveraging the structural and semantic information conveyed in such similar examples. To this end, we present a meta-learning algorithm to find a good model parameter initialization that could fast adapt to the given test case and propose to construct multiple pseudo-NER tasks for meta-training by computing sentence similarities. To further improve the model's generalization ability across different languages, we introduce a masking scheme and augment the loss function with an additional maximum term during meta-training. We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. The results show that our approach significantly outperforms existing state-of-the-art methods across the board.
24

Pasini, Tommaso, Alessandro Raganato and Roberto Navigli. “XL-WSD: An Extra-Large and Cross-Lingual Evaluation Framework for Word Sense Disambiguation”. Proceedings of the AAAI Conference on Artificial Intelligence 35, No. 15 (May 18, 2021): 13648–56. http://dx.doi.org/10.1609/aaai.v35i15.17609.

Annotation:
Transformer-based architectures brought a breeze of change to Word Sense Disambiguation (WSD), improving models' performances by a large margin. The fast development of new approaches has been further encouraged by a well-framed evaluation suite for English, which has allowed their performances to be kept track of and compared fairly. However, other languages have remained largely unexplored, as testing data are available for a few languages only and the evaluation setting is rather matted. In this paper, we untangle this situation by proposing XL-WSD, a cross-lingual evaluation benchmark for the WSD task featuring sense-annotated development and test sets in 18 languages from six different linguistic families, together with language-specific silver training data. We leverage XL-WSD datasets to conduct an extensive evaluation of neural and knowledge-based approaches, including the most recent multilingual language models. Results show that the zero-shot knowledge transfer across languages is a promising research direction within the WSD field, especially when considering low-resourced languages where large pre-trained multilingual models still perform poorly. We make the evaluation suite and the code for performing the experiments available at https://sapienzanlp.github.io/xl-wsd/.
25

Chen, Muzi. “Analysis on Transfer Learning Models and Applications in Natural Language Processing”. Highlights in Science, Engineering and Technology 16 (November 10, 2022): 446–52. http://dx.doi.org/10.54097/hset.v16i.2609.

Annotation:
Assumptions have been established that many machine learning algorithms expect the training data and the testing data to share the same feature space or distribution. Thus, transfer learning (TL) has risen because it tolerates differences in feature spaces and data distributions. It is an optimization to improve performance from task to task. This paper includes the basic knowledge of transfer learning and summarizes some relevant experimental results of popular applications using transfer learning in the natural language processing (NLP) field. The mathematical definition of TL is briefly mentioned. After that, basic knowledge including the different categories of TL, and the comparison between TL and traditional machine learning models is introduced. Then, some applications which mainly focus on question answering, cyberbullying detection, and sentiment analysis will be presented. Other applications will also be briefly introduced such as Named Entity Recognition (NER), Intent Classification, and Cross-Lingual Learning, etc. For each application, this study provides a reference on transfer learning models for related research.
26

Ebrahimi, Mohammadreza, Yidong Chai, Sagar Samtani and Hsinchun Chen. “Cross-Lingual Cybersecurity Analytics in the International Dark Web with Adversarial Deep Representation Learning”. MIS Quarterly 46, No. 2 (May 19, 2022): 1209–26. http://dx.doi.org/10.25300/misq/2022/16618.

Annotation:
International dark web platforms operating within multiple geopolitical regions and languages host a myriad of hacker assets such as malware, hacking tools, hacking tutorials, and malicious source code. Cybersecurity analytics organizations employ machine learning models trained on human-labeled data to automatically detect these assets and bolster their situational awareness. However, the lack of human-labeled training data is prohibitive when analyzing foreign-language dark web content. In this research note, we adopt the computational design science paradigm to develop a novel IT artifact for cross-lingual hacker asset detection (CLHAD). CLHAD automatically leverages the knowledge learned from English content to detect hacker assets in non-English dark web platforms. CLHAD encompasses a novel Adversarial deep representation learning (ADREL) method, which generates multilingual text representations using generative adversarial networks (GANs). Drawing upon the state of the art in cross-lingual knowledge transfer, ADREL is a novel approach to automatically extract transferable text representations and facilitate the analysis of multilingual content. We evaluate CLHAD on Russian, French, and Italian dark web platforms and demonstrate its practical utility in hacker asset profiling, and conduct a proof-of-concept case study. Our analysis suggests that cybersecurity managers may benefit more from focusing on Russian to identify sophisticated hacking assets. In contrast, financial hacker assets are scattered among several dominant dark web languages. Managerial insights for security managers are discussed at operational and strategic levels.
27

Chen, Nuo, Linjun Shou, Ming Gong and Jian Pei. “From Good to Best: Two-Stage Training for Cross-Lingual Machine Reading Comprehension”. Proceedings of the AAAI Conference on Artificial Intelligence 36, No. 10 (June 28, 2022): 10501–8. http://dx.doi.org/10.1609/aaai.v36i10.21293.

Annotation:
Cross-lingual Machine Reading Comprehension (xMRC) is a challenging task due to the lack of training data in low-resource languages. Recent approaches use training data only in a resource-rich language (such as English) to fine-tune large-scale cross-lingual pre-trained language models, which transfer knowledge from resource-rich languages (source) to low-resource languages (target). Due to the big difference between languages, the model fine-tuned only by the source language may not perform well for target languages. In our study, we make an interesting observation that while the top 1 result predicted by the previous approaches may often fail to hit the ground-truth answer, there are still good chances for the correct answer to be contained in the set of top k predicted results. Intuitively, the previous approaches have empowered the model certain level of capability to roughly distinguish good answers from bad ones. However, without sufficient training data, it is not powerful enough to capture the nuances between the accurate answer and those approximate ones. Based on this observation, we develop a two-stage approach to enhance the model performance. The first stage targets at recall; we design a hard-learning (HL) algorithm to maximize the likelihood that the top k predictions contain the accurate answer. The second stage focuses on precision, where an answer-aware contrastive learning (AA-CL) mechanism is developed to learn the minute difference between the accurate answer and other candidates. Extensive experiments show that our model significantly outperforms strong baselines on two cross-lingual MRC benchmark datasets.
28

Jiang, Aiqi, and Arkaitz Zubiaga. “SexWEs: Domain-Aware Word Embeddings via Cross-Lingual Semantic Specialisation for Chinese Sexism Detection in Social Media”. Proceedings of the International AAAI Conference on Web and Social Media 17 (June 2, 2023): 447–58. http://dx.doi.org/10.1609/icwsm.v17i1.22159.

Annotation:
The goal of sexism detection is to mitigate negative online content targeting certain gender groups of people. However, the limited availability of labeled sexism-related datasets makes it problematic to identify online sexism for low-resource languages. In this paper, we address the task of automatic sexism detection in social media for one low-resource language -- Chinese. Rather than collecting new sexism data or building cross-lingual transfer learning models, we develop a cross-lingual domain-aware semantic specialisation system in order to make the most of existing data. Semantic specialisation is a technique for retrofitting pre-trained distributional word vectors by integrating external linguistic knowledge (such as lexico-semantic relations) into the specialised feature space. To do this, we leverage semantic resources for sexism from a high-resource language (English) to specialise pre-trained word vectors in the target language (Chinese) to inject domain knowledge. We demonstrate the benefit of our sexist word embeddings (SexWEs) specialised by our framework via intrinsic evaluation of word similarity and extrinsic evaluation of sexism detection. Compared with other specialisation approaches and Chinese baseline word vectors, our SexWEs shows an average score improvement of 0.033 and 0.064 in both intrinsic and extrinsic evaluations, respectively. The ablative results and visualisation of SexWEs also prove the effectiveness of our framework on retrofitting word vectors in low-resource languages.
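Semantic specialisation retrofits pre-trained vectors toward external lexical constraints. As a rough illustration of that family of techniques, here is a classic retrofitting-style update in numpy with a toy lexicon; SexWEs itself injects cross-lingually transferred sexism-related constraints and uses a more elaborate specialisation model:
```python
import numpy as np

def retrofit(vectors, lexicon, n_iters=10, alpha=1.0, beta=1.0):
    """Pull each word vector toward its lexicon neighbours while keeping it
    close to its original distributional vector."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(n_iters):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new]
            if word not in new or not nbrs:
                continue
            nbr_sum = np.sum([new[n] for n in nbrs], axis=0)
            new[word] = (alpha * vectors[word] + beta * nbr_sum) / (alpha + beta * len(nbrs))
    return new

# toy example: specialise two related terms toward each other
vecs = {w: np.random.rand(50) for w in ["sexist", "misogynistic", "kind"]}
specialised = retrofit(vecs, {"sexist": ["misogynistic"], "misogynistic": ["sexist"]})
```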
29

Pajoohesh, Parto. “A Probe into Lexical Depth: What is the Direction of Transfer for L1 Literacy and L2 Development?” Heritage Language Journal 5, No. 1 (June 30, 2007): 117–46. http://dx.doi.org/10.46538/hlj.5.1.6.

Annotation:
This paper examines the intersection between heritage language (HL) learning and the development of English and Farsi deep lexical knowledge. The study compares two groups of Farsi-English bilingual children with different HL educational backgrounds and a group of English-only children by testing their paradigmatic-syntagmatic knowledge of words. A statistical analysis of the children's deep lexical knowledge was conducted in light of their HL literacy experience, second language (English) schooling, and length of residence. The findings revealed that longer length of residence and L2 schooling correlates with better performance on the L2 measures of lexical depth, whereas longer residence in the home country and first language (L1) formal schooling do not correlate with better performance on the Farsi measures. The study concluded that, in the long term, the learning of a heritage language, in combination with L2 academic instruction is more effective to the cross-lingual transfer of academic skills.
30

S, Tarun. “Bridging Languages through Images: A Multilingual Text-to-Image Synthesis Approach”. International Journal of Scientific Research in Engineering and Management 08, No. 05 (May 11, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33773.

Annotation:
This research investigates the challenges posed by the predominant focus on English language text-to-image generation (TTI) because of the lack of annotated image caption data in other languages. The resulting inequitable access to TTI technology in non-English-speaking regions motivates the research of multilingual TTI (mTTI) and the potential of neural machine translation (NMT) to facilitate its development. The study presents two main contributions. Firstly, a systematic empirical study employing a multilingual multi-modal encoder evaluates standard cross-lingual NLP methods applied to mTTI, including TRANSLATE TRAIN, TRANSLATE TEST, and ZERO-SHOT TRANSFER. Secondly, a novel parameter-efficient approach called Ensemble Adapter (ENSAD) is introduced, leveraging multilingual text knowledge within the mTTI framework to avoid the language gap and enhance mTTI performance. Additionally, the research addresses challenges associated with transformer-based TTI models, such as slow generation and complexity for high-resolution images. It proposes hierarchical transformers and local parallel autoregressive generation techniques to overcome these limitations. A 6B-parameter transformer pretrained with a cross-modal general language model (CogLM) and fine-tuned for fast super-resolution results in a new text-to-image system, denoted as It, which demonstrates competitive performance compared to the state-of-the-art DALL-E-2. Furthermore, It supports interactive text-guided editing on images, offering a versatile and efficient solution for text-to-image generation. Keywords: Text-to-image generation, Multilingual TTI (mTTI), Neural machine translation (NMT), Cross-lingual NLP, Ensemble Adapter (ENSAD), Hierarchical transformers, Super-resolution, Transformer-based models, Cross-modal general language model (CogLM).
31

Xu, Yaoli, Jinjun Zhong, Suzhi Zhang, Chenglin Li, Pu Li, Yanbu Guo, Yuhua Li, Hui Liang and Yazhou Zhang. “A Domain-Oriented Entity Alignment Approach Based on Filtering Multi-Type Graph Neural Networks”. Applied Sciences 13, No. 16 (August 14, 2023): 9237. http://dx.doi.org/10.3390/app13169237.

Annotation:
Owing to the heterogeneity and incomplete information present in various domain knowledge graphs, the alignment of distinct source entities that represent an identical real-world entity becomes imperative. Existing methods focus on cross-lingual knowledge graph alignment, and assume that the entities of knowledge graphs in the same language are unique. However, due to the ambiguity of language, heterogeneous knowledge graphs in the same language are often duplicated, and relationship triples are far less than those of cross-lingual knowledge graphs. Moreover, existing methods rarely exclude noisy entities in the process of alignment. These make it impossible for existing methods to deal effectively with the entity alignment of domain knowledge graphs. In order to address these issues, we propose a novel entity alignment approach based on domain-oriented embedded representation (DomainEA). Firstly, a filtering mechanism employs the language model to extract the semantic features of entities and to exclude noisy entities for each entity. Secondly, a Structural Aggregator (SA) incorporates multiple hidden layers to generate high-order neighborhood-aware embeddings of entities that have few relationship connections. An Attribute Aggregator (AA) introduces self-attention to dynamically calculate weights that represent the importance of the attribute values of the entities. Finally, the approach calculates a transformation matrix to map the embeddings of distinct domain knowledge graphs onto a unified space, and matches entities via the joint embeddings of the SA and AA. Compared to six state-of-the-art methods, our experimental results on multiple food datasets show the following: (i) Our approach achieves an average improvement of 6.9% on MRR. (ii) The size of the dataset has a subtle influence on our approach; there is a positive correlation between the expansion of the dataset size and an improvement in most of the metrics. (iii) We can achieve a significant improvement in the level of recall by employing a filtering mechanism that is limited to the top-100 nearest entities as the candidate pairs.
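The final mapping step, a transformation matrix that carries one knowledge graph's entity embeddings into the other's space given seed entity pairs, is often solved in closed form as an orthogonal Procrustes problem. A small numpy sketch under that assumption, with random toy data standing in for the joint SA/AA embeddings:
```python
import numpy as np

def procrustes_mapping(src, tgt):
    """Orthogonal matrix W minimising ||src @ W - tgt|| over aligned seed pairs."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

src_seed = np.random.rand(50, 16)     # seed entity embeddings from knowledge graph A
tgt_seed = np.random.rand(50, 16)     # the matching entities' embeddings from graph B
W = procrustes_mapping(src_seed, tgt_seed)
mapped = src_seed @ W                 # graph-A entities expressed in graph B's space
```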
32

Yadav, Siddharth, and Tanmoy Chakraborty. “Zero-Shot Sentiment Analysis for Code-Mixed Data”. Proceedings of the AAAI Conference on Artificial Intelligence 35, No. 18 (May 18, 2021): 15941–42. http://dx.doi.org/10.1609/aaai.v35i18.17967.

Annotation:
Code-mixing is the practice of alternating between two or more languages. A major part of sentiment analysis research has been monolingual and they perform poorly on the code-mixed text. We introduce methods that use multilingual and cross-lingual embeddings to transfer knowledge from monolingual text to code-mixed text for code-mixed sentiment analysis. Our methods handle code-mixed text through zero-shot learning and beat state-of-the-art English-Spanish code-mixed sentiment analysis by an absolute 3% F1-score. We are able to achieve 0.58 F1-score (without a parallel corpus) and 0.62 F1-score (with the parallel corpus) on the same benchmark in a zero-shot way as compared to 0.68 F1-score in supervised settings. Our code is publicly available on github.com/sedflix/unsacmt.
33

Wu, Ji, Madeleine Orr, Kurumi Aizawa and Yuhei Inoue. “Language Relativity in Legacy Literature: A Systematic Review in Multiple Languages”. Sustainability 13, No. 20 (October 14, 2021): 11333. http://dx.doi.org/10.3390/su132011333.

Annotation:
Since the Olympic Agenda 2020, legacy has been widely used as a justification for hosting the Olympic Games, through which sustainable development can be achieved for both events and host cities. To date, no universal definition of legacy has been established, which presents challenges for legacy-related international knowledge transfer among host cities. To address this gap, a multilingual systematic review of the literature regarding the concept of legacy was conducted in French, Japanese, Chinese, and English. Using English literature as a baseline, points of convergence and divergence among the languages were identified. While all four languages value the concept of legacy as an important facet of mega-events, significant differences were found within each language. This finding highlights the importance of strategies that align different cultures when promoting sustainable development of some global movements such as the Olympic legacy. Sport management is replete with international topics, such as international events and sport for development, and each topic is studied simultaneously in several languages and with potentially differing frameworks and perspectives. Thus, literature reviews that examine the English literature, exclusively, are innately limited in scope. The development of partnerships and resources that facilitate cross-lingual and cross-cultural consultation and collaboration is an important research agenda. More research is needed on knowledge translation across languages.
34

Guseynova, Innara. “Digital Transformation and Its Consequences for Heritage Languages”. Nizhny Novgorod Linguistics University Bulletin, Special issue (December 31, 2020): 44–58. http://dx.doi.org/10.47388/2072-3490/lunn2020-si-44-58.

Annotation:
The article attempts to conduct a primary analysis of the consequences of digital transformation for heritage languages which make up the cultural and historical legacy of individual ethnic communities. In a multilingual society, such a study requires an integrated approach, which takes into account the sociolinguistic parameters of various target audiences, communication channels aimed to disseminate and transfer information, discourse analyses of linguistic means, as well as extralinguistic factors impacting the development of different environments. It is equally important to study the specificity of the socio-cultural interaction between communicants in the professional sphere, which primarily indicates the institutional status of participants in communication, as well as their observance / nonobservance of linguistic norms. The latter seems extremely important with regard to heritage languages and their linguistic status in institutional discourse. In many respects, observance / nonobservance of linguistic norms makes it possible, on the one hand, to define the linguistic portrait of the communicant and, on the other hand, to assess the survival of national identity. Both aspects are central across various types of institutional discourse, including political, marketing, advertising discourse etc. The analysis of the institutional aspects of cross-cultural and cross-lingual communication is carried out using an etiological approach that allows to determine the degree of importance of sociolinguistic parameters to achieve adequacy of socio-cultural interaction of representatives of different linguocultures. It is performed indirectly using various language pairs, in the context of heritage bilingualism, as well as interpersonal interaction. The article also expounds consequences of the global turn towards digital transformation affecting the overall knowledge in liberal arts and human sciences in general and cross-lingual and cross-cultural communication in particular. The study discusses areas of application of heritage language resources such as locus branding, image making, reports of scientific and technical achievements, etc. The article concludes by inferring the need to preserve linguistic diversity and its teleological use in various types of institutional discourse.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Shwartz, Mila, Mark Leikin und David L. Share. „Bi-literate bilingualism versus mono-literate bilingualism“. Written Language and Literacy 8, Nr. 2 (31.12.2005): 103–30. http://dx.doi.org/10.1075/wll.8.2.08shw.

Der volle Inhalt der Quelle
Annotation:
The present study compared the early Hebrew (L2) literacy development of three groups: two groups of bilinguals (bi-literate and mono-literate Russian-Hebrew speakers) and a third group of monolingual Hebrew speakers. We predicted that bi-literacy rather than bilingualism is the key variable as regards L2 literacy learning. In a longitudinal design, a variety of linguistic, meta-linguistic, and cognitive tasks were administered at the commencement of first grade, with Hebrew reading and spelling assessed at the end of the year. Results demonstrated that bi-literate bilinguals were far in advance of both mono-literate (Russian-Hebrew) bilinguals and monolingual Hebrew speakers on all reading fluency measures at the end of Grade 1. Bi-literate bilinguals also showed a clear advantage over their mono-literate bilingual and monolingual peers on all phonological awareness tasks. The mono-literate bilinguals also demonstrated some modest gains over their monolingual peers in Grade 1 reading accuracy. All three groups performed similarly on L2 linguistic tasks. These findings confirm Bialystok’s (2002) assertion that bilingualism per se may not be the most influential factor in L2 reading acquisition. Early (L1) literacy acquisition, however, can greatly enhance L2 literacy development. The present findings also suggest that the actual mechanism of cross-linguistic transfer is the insight gained into the alphabetic principle common to all alphabetic writing systems and not merely knowledge of a specific letter-sound code such as the Roman orthography.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Imin, Gvzelnur, Mijit Ablimit, Hankiz Yilahun und Askar Hamdulla. „A Character String-Based Stemming for Morphologically Derivative Languages“. Information 13, Nr. 4 (28.03.2022): 170. http://dx.doi.org/10.3390/info13040170.

Der volle Inhalt der Quelle
Annotation:
Morphologically derivative languages form words by fusing stems and suffixes; extracting stems is therefore important for cross-lingual alignment and knowledge transfer. Because phonetic harmony and disharmony occur when linguistic particles are combined, both phonetic and morphological changes need to be analyzed. This paper proposes a multilingual stemming method that learns morpho-phonetic changes automatically based on character-based embeddings and sequential modeling. First, character feature embeddings at the sentence level are used as input, and a BiLSTM model is used to obtain the forward and backward context sequences; an attention mechanism is added to this model for weight learning, and global feature information is extracted to capture stem and affix boundaries. Finally, a CRF model is used to learn more information from the sequence features and describe context information more effectively. To verify the effectiveness of the above model, it is compared with traditional models on two different datasets covering three derivative languages: Uyghur, Kazakh, and Kirghiz. The experimental results show that the proposed model achieves the best stemming performance on multilingual sentence-level datasets. In addition, it outperforms other traditional models, fully considers the data characteristics, and offers certain advantages with less human intervention.
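A minimal sketch of the character-level sequence-labelling idea described above, written in PyTorch. It covers only the character-embedding and BiLSTM parts; the attention mechanism and CRF layer used in the paper are omitted, and the boundary tag set, layer sizes, and class names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

# Hypothetical boundary tags marking where a stem ends and a suffix begins.
TAGS = ["B-STEM", "I-STEM", "B-SUF", "I-SUF"]

class CharBiLSTMTagger(nn.Module):
    """Character embeddings -> BiLSTM -> per-character tag scores (no CRF here)."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128, num_tags=len(TAGS)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, char_ids):                  # (batch, seq_len)
        hidden_states, _ = self.lstm(self.emb(char_ids))
        return self.out(hidden_states)            # (batch, seq_len, num_tags)

# Toy usage: tag scores for one 10-character input.
model = CharBiLSTMTagger(vocab_size=100)
scores = model(torch.randint(1, 100, (1, 10)))
print(scores.shape)  # torch.Size([1, 10, 4])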
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Kumari, Divya, Asif Ekbal, Rejwanul Haque, Pushpak Bhattacharyya und Andy Way. „Reinforced NMT for Sentiment and Content Preservation in Low-resource Scenario“. ACM Transactions on Asian and Low-Resource Language Information Processing 20, Nr. 4 (28.06.2021): 1–27. http://dx.doi.org/10.1145/3450970.

Der volle Inhalt der Quelle
Annotation:
The preservation of domain knowledge from the source to the target is crucial in any translation workflow. Hence, translation service providers that use machine translation (MT) in production could reasonably expect that the translation process should transfer both the underlying pragmatics and the semantics of the source-side sentences into the target language. However, recent studies suggest that MT systems often fail to preserve in the target such crucial information (e.g., sentiment, emotion, gender traits) embedded in the source text. In this context, the raw automatic translations are often directly fed to other natural language processing (NLP) applications (e.g., a sentiment classifier) in a cross-lingual platform. Hence, the loss of such crucial information during translation could negatively affect the performance of downstream NLP tasks that heavily rely on the output of the MT systems. In our current research, we carefully balance both sides (i.e., sentiment and semantics) during translation by controlling a global-attention-based neural MT (NMT) system to generate translations that encode the underlying sentiment of a source sentence while preserving its non-opinionated semantic content. Toward this, we use a state-of-the-art reinforcement learning method, namely actor-critic, that includes a novel reward combination module to fine-tune the NMT system so that it learns to generate translations that are best suited for a downstream task, viz. sentiment classification, while ensuring that the source-side semantics remains intact in the process. Experimental results for the Hindi–English language pair show that our proposed method significantly improves the performance of the sentiment classifier and also results in an improved NMT system.
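As a rough illustration of the reward-combination idea, the sketch below blends a sentiment-preservation score and a content-preservation score into a single scalar that could drive reinforcement-style fine-tuning. The helper callables, the weighting scheme, and the toy inputs are hypothetical stand-ins, not the paper's actual reward module.

def combined_reward(source, translation, sentiment_score, semantic_sim, alpha=0.5):
    """Blend sentiment preservation and content preservation into one scalar reward.

    sentiment_score: callable mapping a translation to the probability that it
                     carries the source sentence's polarity (assumed classifier).
    semantic_sim:    callable mapping (source, translation) to a similarity in [0, 1].
    """
    r_sentiment = sentiment_score(translation)
    r_content = semantic_sim(source, translation)
    return alpha * r_sentiment + (1.0 - alpha) * r_content

# Toy usage with stand-in scorers (a real system would plug in trained models).
reward = combined_reward(
    "yeh film shaandaar hai", "this movie is wonderful",
    sentiment_score=lambda hyp: 0.9,
    semantic_sim=lambda src, hyp: 0.8,
)
print(reward)  # ~0.85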
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ebel, S., H. Blättermann, U. Weik, J. Margraf-Stiksrud und R. Deinzer. „High Plaque Levels after Thorough Toothbrushing: What Impedes Efficacy?“ JDR Clinical & Translational Research 4, Nr. 2 (14.11.2018): 135–42. http://dx.doi.org/10.1177/2380084418813310.

Der volle Inhalt der Quelle
Annotation:
Objectives: Previous studies have shown high levels of dental plaque after toothbrushing and poor toothbrushing performance. There is a lack of evidence about what oral hygiene behavior predicts persistent plaque. The present cross-sectional study thus relates toothbrushing behavior to oral cleanliness after brushing and to gingivitis. Methods: All young adults from a central town in Germany who turned 18 y old in the year prior to the examination were invited to participate in the study. They were asked to clean their teeth to their best abilities while being filmed. Videos were analyzed regarding brushing movements (vertical, circular, horizontal, modified Bass technique) and evenness of distribution of brushing time across vestibular (labial/buccal) and palatinal (lingual/palatinal) surfaces. Dental status, gingival bleeding, and oral cleanliness after oral hygiene were assessed. Results: Ninety-eight young adults participated in the study. Gingival margins showed persistent plaque at 69.48% ± 12.31% sites (mean ± SD) after participants brushed to their best abilities. Regression analyses with the brushing movements and evenness of distribution of brushing time as predictors explained 15.2% (adjusted R2 = 0.152, P = 0.001) of the variance in marginal plaque and 19.4% (adjusted R2 = 0.194, P < 0.001) of the variance in bleeding. Evenness of distribution of brushing time was the most important behavioral predictor. Conclusion: Even when asked to perform optimal oral hygiene, young German adults distributed their brushing time across surfaces unevenly. Compared with brushing movements, this factor turned out to be of more significance when explaining the variance of plaque and bleeding. Knowledge Transfer Statement: Results of this study can help clinicians and patients understand the meaning of specific behavioral aspects of toothbrushing for oral cleanliness and oral health.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Pikuliak, Matúš. „Cross-Lingual Learning With Distributed Representations“. Proceedings of the AAAI Conference on Artificial Intelligence 32, Nr. 1 (29.04.2018). http://dx.doi.org/10.1609/aaai.v32i1.11348.

Der volle Inhalt der Quelle
Annotation:
Cross-lingual learning can help bring state-of-the-art deep learning solutions to smaller languages. These languages generally lack the resources needed to train advanced neural networks. By transferring knowledge across languages, we can improve results on various NLP tasks.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Sen, Pawan, Rohit Sharma, Lucky Verma und Pari Tenguriya. „Empowering Multilingual AI: Cross-Lingual Transfer Learning“. International Journal of Psychosocial Rehabilitation, März 2020, 7915–18. http://dx.doi.org/10.61841/v24i3/400259.

Der volle Inhalt der Quelle
Annotation:
Multilingual Natural Language Processing (NLP) and Cross-Lingual Transfer Learning have emerged as pivotal fields in the realm of language technology. This abstract explores the essential concepts and methodologies behind these areas, shedding light on their significance in a world characterized by linguistic diversity. Multilingual NLP enables machines to process and generate text in multiple languages, breaking down communication barriers and fostering global collaboration. Cross-lingual transfer learning, on the other hand, leverages knowledge from one language to enhance NLP tasks in another, facilitating efficient resource utilization and improved model performance. The abstract highlights the growing relevance of these approaches in a multilingual and interconnected world, underscoring their potential to reshape the future of natural language understanding and communication.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Ahmat, Ahtamjan, Yating Yang, Bo Ma, Rui Dong, Kaiwen Lu und Lei Wang. „WAD-X: Improving Zero-shot Cross-lingual Transfer via Adapter-based Word Alignment“. ACM Transactions on Asian and Low-Resource Language Information Processing, 19.07.2023. http://dx.doi.org/10.1145/3610289.

Der volle Inhalt der Quelle
Annotation:
Multilingual pre-trained language models (mPLMs) have achieved remarkable performance on zero-shot cross-lingual transfer learning. However, most mPLMs encourage cross-lingual alignment only implicitly during the pre-training stage, making it hard to capture accurate word alignment across languages. In this paper, we propose Word-align ADapters for Cross-lingual transfer (WAD-X) to explicitly align the word representations of mPLMs via language-specific subspaces. Taking an mPLM as the backbone model, WAD-X constructs a subspace for each source-target language pair via adapters. The adapters use statistical alignment as prior knowledge to guide word-level alignment in the corresponding bilingual semantic subspace. We evaluate our model across a set of target languages on three zero-shot cross-lingual transfer tasks: part-of-speech tagging (POS), dependency parsing (DP), and sentiment analysis (SA). Experimental results demonstrate that our proposed model improves zero-shot cross-lingual transfer on all three benchmarks, with improvements of 2.19, 2.50, and 1.61 points on the POS, DP, and SA tasks over strong baselines.
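The sketch below shows a generic bottleneck adapter of the kind commonly inserted into the layers of a multilingual pre-trained transformer; it only illustrates the adapter mechanism and does not reproduce WAD-X's word-alignment objective or its statistical-alignment prior. Dimensions and names are assumptions.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, and add a residual connection."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Toy usage: adapt the hidden states of one 12-token sentence.
adapter = BottleneckAdapter()
hidden = torch.randn(1, 12, 768)
print(adapter(hidden).shape)  # torch.Size([1, 12, 768])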
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Wang, Deqing, Junjie Wu, Jingyuan Yang, Baoyu Jing, Wenjie Zhang, Xiaonan He und Hui Zhang. „Cross-Lingual Knowledge Transferring by Structural Correspondence and Space Transfer“. IEEE Transactions on Cybernetics, 2021, 1–12. http://dx.doi.org/10.1109/tcyb.2021.3051005.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Yu, Chuanming, Haodong Xue, Manyi Wang und Lu An. „Towards an entity relation extraction framework in the cross-lingual context“. Electronic Library ahead-of-print, ahead-of-print (03.08.2021). http://dx.doi.org/10.1108/el-10-2020-0304.

Der volle Inhalt der Quelle
Annotation:
Purpose Owing to the uneven distribution of annotated corpora among different languages, it is necessary to bridge the gap between low-resource and high-resource languages. From the perspective of entity relation extraction, this paper aims to extend the knowledge acquisition task from a single-language context to a cross-lingual context and to improve relation extraction performance for low-resource languages. Design/methodology/approach This paper proposes a cross-lingual adversarial relation extraction (CLARE) framework, which decomposes cross-lingual relation extraction into parallel corpus acquisition and adversarial adaptation relation extraction. Based on the proposed framework, this paper conducts extensive experiments on two tasks, i.e. English-to-Chinese and English-to-Arabic cross-lingual entity relation extraction. Findings The Macro-F1 values of the optimal models in the two tasks are 0.8801 and 0.7899, respectively, indicating that the proposed CLARE framework can significantly improve entity relation extraction for low-resource languages. The experimental results suggest that the proposed framework can effectively transfer the corpus as well as the annotated tags from English to Chinese and Arabic. This study reveals that the proposed approach is less labour-intensive and more effective in cross-lingual entity relation extraction than the manual method, and that it generalizes well across different languages. Originality/value The research results are of great significance for improving the performance of cross-lingual knowledge acquisition. The cross-lingual transfer may greatly reduce the time and cost of manually constructing multilingual corpora. It sheds light on knowledge acquisition and organization from unstructured text in the era of big data.
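For the adversarial adaptation step, a common realisation is a language discriminator trained through a gradient reversal layer so that shared features become language-invariant (DANN-style). The sketch below shows only that generic mechanism; it is an assumption about how adversarial adaptation can be implemented, not the CLARE framework itself.

import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back into the shared encoder.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class LanguageDiscriminator(nn.Module):
    """Predicts the language of a feature vector; reversed gradients push the
    shared encoder toward language-invariant representations."""
    def __init__(self, feat_dim=768, num_languages=2):
        super().__init__()
        self.clf = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_languages))

    def forward(self, features):
        return self.clf(grad_reverse(features))

# Toy usage: discriminate source vs. target language from sentence features.
disc = LanguageDiscriminator()
logits = disc(torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])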
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Novák, Attila, und Borbála Novák. „Cross-lingual transfer of knowledge in distributional language models: Experiments in Hungarian“. Acta Linguistica Academica, 22.11.2022. http://dx.doi.org/10.1556/2062.2022.00580.

Der volle Inhalt der Quelle
Annotation:
Abstract In this paper, we argue that the very convincing performance of recent deep-neural-model-based NLP applications has demonstrated that the distributionalist approach to language description has proven more successful than the earlier subtle rule-based models created by the generative school. The now ubiquitous neural models can naturally handle ambiguity and achieve human-like linguistic performance, with most of their training consisting only of noisy raw linguistic data without any multimodal grounding or external supervision, refuting Chomsky's argument that a generic neural architecture cannot arrive at the linguistic performance exhibited by humans given the limited input available to children. In addition, we demonstrate in experiments with Hungarian as the target language that the shared internal representations in multilingually trained versions of these models enable them to transfer specific linguistic skills, including structured annotation skills, from one language to another remarkably efficiently.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Tikhonova, Maria, Vladislav Mikhailov, Dina Pisarevskaya, Valentin Malykh und Tatiana Shavrina. „Ad astra or astray: Exploring linguistic knowledge of multilingual BERT through NLI task“. Natural Language Engineering, 09.06.2022, 1–30. http://dx.doi.org/10.1017/s1351324922000225.

Der volle Inhalt der Quelle
Annotation:
Abstract Recent research has reported that standard fine-tuning approaches can be unstable because they are prone to various sources of randomness, including but not limited to weight initialization, training data order, and hardware. Such brittleness can lead to different evaluation results, prediction confidences, and generalization inconsistency for the same models independently fine-tuned under the same experimental setup. Our paper explores this problem in natural language inference, a common task in benchmarking practices, and extends the ongoing research to the multilingual setting. We propose six novel textual entailment and broad-coverage diagnostic datasets for French, German, and Swedish. Our key findings are that the mBERT model demonstrates fine-tuning instability for categories that involve lexical semantics, logic, and predicate-argument structure and struggles to learn monotonicity, negation, numeracy, and symmetry. We also observe that using extra training data only in English can enhance generalization performance and fine-tuning stability, which we attribute to cross-lingual transfer capabilities. However, the ratio of particular features in the additional training data might instead hurt performance for some model instances. We are publicly releasing the datasets, hoping to foster diagnostic investigation of language models (LMs) in a cross-lingual scenario, particularly in terms of benchmarking, which might promote a more holistic understanding of multilingualism in LMs and cross-lingual knowledge transfer.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Batsuren, Khuyagbaatar, Gábor Bella und Fausto Giunchiglia. „A large and evolving cognate database“. Language Resources and Evaluation, 30.05.2021. http://dx.doi.org/10.1007/s10579-021-09544-6.

Der volle Inhalt der Quelle
Annotation:
Abstract We present CogNet, a large-scale, automatically-built database of sense-tagged cognates—words of common origin and meaning across languages. CogNet is continuously evolving: its current version contains over 8 million cognate pairs over 338 languages and 35 writing systems, with new releases already in preparation. The paper presents the algorithm and input resources used for its computation, an evaluation of the result, as well as a quantitative analysis of cognate data leading to novel insights on language diversity. Furthermore, as an example of the use of large-scale cross-lingual knowledge bases for improving the quality of multilingual applications, we present a case study on the use of CogNet for bilingual lexicon induction in the framework of cross-lingual transfer learning.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Rivera-Zavala, Renzo M., und Paloma Martínez. „Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization“. BMC Bioinformatics 22, S1 (Dezember 2021). http://dx.doi.org/10.1186/s12859-021-04247-9.

Der volle Inhalt der Quelle
Annotation:
Abstract Background The volume of biomedical literature and clinical data is growing at an exponential rate. Therefore, efficient access to data described in unstructured biomedical texts is a crucial task for the biomedical industry and research. Named Entity Recognition (NER) is the first step for information and knowledge acquisition when we deal with unstructured texts. Recent NER approaches use contextualized word representations as input for a downstream classification task. However, distributed word vectors (embeddings) are very limited in Spanish and even more so for the biomedical domain. Methods In this work, we develop several biomedical Spanish word representations, and we introduce two deep learning approaches for pharmaceutical, chemical, and other biomedical entity recognition in Spanish clinical case texts and biomedical texts, one based on a Bi-LSTM-CRF model and the other on a BERT-based architecture. Results Several Spanish biomedical embeddings together with the two deep learning models were evaluated on the PharmaCoNER and CORD-19 datasets. The PharmaCoNER dataset is composed of a set of Spanish clinical cases annotated with drugs, chemical compounds, and pharmacological substances; our extended Bi-LSTM-CRF model obtains an F-score of 85.24% on entity identification and classification, and the BERT model obtains an F-score of 88.80%. For the entity normalization task, the extended Bi-LSTM-CRF model achieves an F-score of 72.85% and the BERT model achieves 79.97%. The CORD-19 dataset consists of scholarly articles written in English annotated with biomedical concepts such as disorder, species, chemical or drug, gene and protein, enzyme, and anatomy. The Bi-LSTM-CRF model and the BERT model obtain F-measures of 78.23% and 78.86%, respectively, on entity identification and classification on the CORD-19 dataset. Conclusion These results prove that deep learning models with in-domain knowledge learned from large-scale datasets highly improve named entity recognition performance. Moreover, contextualized representations help in handling the complexities and ambiguity inherent to biomedical texts. Embeddings based on words, concepts, senses, etc. in languages other than English are required to improve NER tasks in those languages.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Perez, Naiara, Pablo Accuosto, Àlex Bravo, Montse Cuadros, Eva Martínez-García, Horacio Saggion und German Rigau. „Cross-lingual Semantic Annotation of Biomedical Literature: Experiments in Spanish and English“. Bioinformatics, 15.11.2019. http://dx.doi.org/10.1093/bioinformatics/btz853.

Der volle Inhalt der Quelle
Annotation:
Abstract Motivation Biomedical literature is one of the most relevant sources of information for knowledge mining in the field of Bioinformatics. In spite of English being the most widely addressed language in the field, in recent years there has been a growing interest from the natural language processing community in dealing with languages other than English. However, the availability of language resources and tools for the appropriate treatment of non-English texts is lagging behind. Our research is concerned with the semantic annotation of biomedical texts in the Spanish language, which can be considered an under-resourced language where biomedical text processing is concerned. Results We have carried out experiments to assess the effectiveness of several methods for the automatic annotation of biomedical texts in Spanish. One approach is based on the linguistic analysis of Spanish texts and their annotation using an information retrieval and concept disambiguation approach. A second method takes advantage of a Spanish-English machine translation process to annotate English documents and transfer the annotations back to Spanish. A third method combines both procedures. Our evaluation shows that the combined system has competitive advantages over the two individual procedures. Availability UMLSmapper (https://snlt.vicomtech.org/umlsmapper) and the annotation transfer tool (http://scientmin.taln.upf.edu/anntransfer) are freely available for research purposes as web services and/or demos. Supplementary information Supplementary data are available at Bioinformatics online.
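The translate-annotate-transfer route described above can be sketched as follows. Here translate, annotate_english, and align_spans are hypothetical stand-ins for an MT system, an English concept annotator, and a span-alignment step; the actual UMLSmapper and annotation transfer tool differ in detail.

def transfer_annotations(spanish_text, translate, annotate_english, align_spans):
    """Annotate a Spanish text by translating it, annotating the English side,
    and projecting each annotated span back onto the Spanish source."""
    english_text = translate(spanish_text, src="es", tgt="en")
    english_spans = annotate_english(english_text)   # [(start, end, concept_id)]
    spanish_spans = []
    for start, end, concept_id in english_spans:
        es_start, es_end = align_spans(english_text, spanish_text, start, end)
        spanish_spans.append((es_start, es_end, concept_id))
    return spanish_spans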
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Hettiarachchi, Hansi, Mariam Adedoyin-Olowe, Jagdev Bhogal und Mohamed Medhat Gaber. „TTL: transformer-based two-phase transfer learning for cross-lingual news event detection“. International Journal of Machine Learning and Cybernetics, 08.03.2023. http://dx.doi.org/10.1007/s13042-023-01795-9.

Der volle Inhalt der Quelle
Annotation:
Abstract Today, we have access to a vast amount of data, especially on the internet. Online news agencies play a vital role in this data generation, but most of their data is unstructured, requiring an enormous effort to extract important information. Thus, automated intelligent event detection mechanisms are invaluable to the community. In this research, we focus on identifying event details at the sentence and token levels from news articles, considering their fine granularity. Previous research has proposed various approaches, ranging from traditional machine learning to deep learning, targeting event detection at these levels. Among these approaches, transformer-based approaches performed best, utilising the transferability and context awareness of transformers, and achieved state-of-the-art results. However, they treated sentence- and token-level tasks as separate tasks even though their interconnections can be utilised for mutual task improvements. To fill this gap, we propose a novel learning strategy named Two-phase Transfer Learning (TTL) based on transformers, which allows the model to utilise the knowledge from a task at a particular data granularity for another task at a different data granularity, and we evaluate its performance in sentence- and token-level event detection. Also, we empirically evaluate how event detection performance can be improved for different languages (high- and low-resource), involving monolingual and multilingual pre-trained transformers and language-based learning strategies along with the proposed learning strategy. Our findings mainly indicate the effectiveness of multilingual models in low-resource language event detection. Also, TTL can further improve model performance, depending on the learning order of the involved tasks and their relatedness concerning final predictions.
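A minimal sketch of the two-phase idea, assuming a shared encoder that is first used with a sentence-level head and then reused with a token-level head; the tiny transformer, head sizes, and the order shown here are illustrative assumptions (training loops omitted), not the TTL implementation.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """A tiny stand-in for a multilingual transformer encoder."""
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)

    def forward(self, ids):                    # (batch, seq_len)
        return self.enc(self.emb(ids))         # (batch, seq_len, dim)

encoder = SharedEncoder()
sent_head = nn.Linear(128, 2)    # phase 1: sentence-level event detection head
tok_head = nn.Linear(128, 5)     # phase 2: token-level event trigger tagging head

ids = torch.randint(0, 1000, (2, 16))

# Phase 1: pool over tokens for a sentence-level prediction (train this first).
sent_logits = sent_head(encoder(ids).mean(dim=1))   # (2, 2)

# Phase 2: reuse the phase-1 encoder for per-token predictions (fine-tune next).
tok_logits = tok_head(encoder(ids))                  # (2, 16, 5)
print(sent_logits.shape, tok_logits.shape)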
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Do, Phuc, Trung Phan, Hung Le und Brij B. Gupta. „Building a knowledge graph by using cross-lingual transfer method and distributed MinIE algorithm on apache spark“. Neural Computing and Applications, 24.11.2020. http://dx.doi.org/10.1007/s00521-020-05495-1.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen