Ready-made bibliography on the topic "Multi-lingual training"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select the type of source:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Multi-lingual training".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, provided the relevant parameters are available in the work's metadata.

Journal articles on the topic "Multi-lingual training"

1

Chi, Zewen, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. "Cross-Lingual Natural Language Generation via Pre-Training". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7570–77. http://dx.doi.org/10.1609/aaai.v34i05.6256.

Full text of the source
Abstract:
In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pretrain the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings. The pre-training objective encourages the model to represent different languages in the shared space, so that we can conduct zero-shot cross-lingual transfer. After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks. Then the sequence-to-sequence model trained in a single language can be directly evaluated beyond that language (i.e., accepting multi-lingual input and producing multi-lingual output). Experimental results on question generation and abstractive summarization show that our model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation. Moreover, cross-lingual transfer improves NLG performance of low-resource languages by leveraging rich-resource language data. Our implementation and data are available at https://github.com/CZWin32768/xnlg.
APA, Harvard, Vancouver, ISO, and other citation styles
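
For entry 1 above, the following is a minimal, hedged sketch of the zero-shot cross-lingual generation recipe the abstract describes: fine-tune a pretrained multilingual sequence-to-sequence model on monolingual data, then query it in another language. It uses Hugging Face's mT5 as a stand-in for the authors' XNLG model; the checkpoint name, toy training pair, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# mT5 used as a stand-in for the paper's pretrained cross-lingual seq2seq model.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Fine-tune on monolingual (English) question-generation pairs (toy example).
train_pairs = [
    ("answer: Paris context: Paris is the capital of France.",
     "What is the capital of France?"),
]
model.train()
for source, target in train_pairs:
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Because encoder and decoder share a multilingual space, the fine-tuned model
# can be queried with input in another language (zero-shot transfer).
model.eval()
chinese_input = tokenizer("answer: 巴黎 context: 巴黎是法国的首都。", return_tensors="pt")
generated = model.generate(**chinese_input, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```
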
2

Cao, Yue, Xiaojun Wan, Jinge Yao, and Dian Yu. "MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 11–18. http://dx.doi.org/10.1609/aaai.v34i01.5328.

Full text of the source
Abstract:
Automatic text summarization aims at producing a shorter version of the input text that conveys the most important information. However, multi-lingual text summarization, where the goal is to process texts in multiple languages and output summaries in the corresponding languages with a single model, has been rarely studied. In this paper, we present MultiSumm, a novel multi-lingual model for abstractive summarization. The MultiSumm model uses the following training regime: (I) multi-lingual learning that contains language model training, auto-encoder training, translation and back-translation training, and (II) joint summary generation training. We conduct experiments on summarization datasets for five rich-resource languages: English, Chinese, French, Spanish, and German, as well as two low-resource languages: Bosnian and Croatian. Experimental results show that our proposed model significantly outperforms a multi-lingual baseline model. Specifically, our model achieves comparable or even better performance than models trained separately on each language. As an additional contribution, we construct the first summarization dataset for Bosnian and Croatian, containing 177,406 and 204,748 samples, respectively.
APA, Harvard, Vancouver, ISO, and other citation styles
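
The training regime in entry 2 alternates several objectives across languages before joint summary-generation training. The toy sketch below only illustrates that two-stage scheduling pattern; the model, the single shared loss, the random batches, and the epoch counts are placeholders, not the MultiSumm implementation.

```python
import random
import torch

torch.manual_seed(0)

class ToySeq2Seq(torch.nn.Module):
    """Tiny shared model standing in for the multi-lingual summarizer."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.rnn = torch.nn.GRU(dim, dim, batch_first=True)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.emb(tokens))
        return self.out(hidden)

def objective(model, tokens):
    """Next-token cross-entropy, standing in for each training objective."""
    logits = model(tokens[:, :-1])
    return torch.nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

def random_batch():
    return torch.randint(0, 100, (4, 16))  # placeholder token ids

languages = ["en", "zh", "fr", "es", "de", "bs", "hr"]
stage1_tasks = ["language_model", "auto_encoder", "translation", "back_translation"]
model = ToySeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage I: cycle the pre-training objectives over every language.
for _ in range(2):
    for lang in languages:
        for task in stage1_tasks:
            loss = objective(model, random_batch())
            loss.backward()
            opt.step()
            opt.zero_grad()

# Stage II: joint summary-generation training over shuffled languages.
for _ in range(2):
    for lang in random.sample(languages, k=len(languages)):
        loss = objective(model, random_batch())
        loss.backward()
        opt.step()
        opt.zero_grad()
```
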
3

Kovacic, Michael, and Karl Cunningham. "Effective Electrical Safety Program Training in Multi-Lingual/Cultural Environments". IEEE Transactions on Industry Applications 55, no. 4 (July 2019): 4384–88. http://dx.doi.org/10.1109/tia.2019.2907883.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang, and Haobo Cheng. "Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition". Electronics 10, no. 24 (December 20, 2021): 3172. http://dx.doi.org/10.3390/electronics10243172.

Full text of the source
Abstract:
Phonology-based features (articulatory features, AFs) describe the movements of the vocal organs, which are shared across languages. This paper investigates a domain-adversarial neural network (DANN) to extract reliable AFs, and different multi-stream techniques are used for cross-lingual speech recognition. First, a novel universal definition of phonological attributes is proposed for Mandarin, English, German and French. Then a DANN-based AF detector is trained using the source languages (English, German and French). For cross-lingual speech recognition, the AF detectors are used to transfer phonological knowledge from the source languages (English, German and French) to the target language (Mandarin). Two multi-stream approaches are introduced to fuse the acoustic features and cross-lingual AFs. In addition, a monolingual AF system (i.e., with AFs extracted directly from the target language) is also investigated. Experiments show that the performance of the AF detector can be improved by using convolutional neural networks (CNN) with a domain-adversarial learning method. The multi-head attention (MHA) based multi-stream approach reaches the best performance compared to the baseline, the cross-lingual adaptation approach, and other approaches. More specifically, the MHA mode with cross-lingual AFs yields significant improvements over monolingual AFs under restricted training data sizes, and the approach can easily be extended to other low-resource languages.
APA, Harvard, Vancouver, ISO, and other citation styles
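
The domain-adversarial training in entry 4 hinges on a gradient reversal layer: features pass through unchanged in the forward direction, while the gradient from a language-domain classifier is negated so the shared extractor learns language-invariant articulatory-feature representations. Below is a minimal PyTorch sketch of that mechanism; the layer sizes, attribute count, and random input frames are assumptions for illustration, not the paper's architecture.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

feature_extractor = torch.nn.Sequential(torch.nn.Linear(40, 128), torch.nn.ReLU())
af_detector = torch.nn.Linear(128, 20)        # articulatory-attribute outputs
domain_classifier = torch.nn.Linear(128, 3)   # source languages: en / de / fr

acoustic_frames = torch.randn(8, 40)          # stand-in acoustic features
shared = feature_extractor(acoustic_frames)
af_logits = af_detector(shared)                          # main AF-detection head
domain_logits = domain_classifier(grad_reverse(shared))  # adversarial head
```
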
5

Zhang, Mozhi, Yoshinari Fujinuma, and Jordan Boyd-Graber. "Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.

Full text of the source
Abstract:
Text classification must sometimes be applied in a low-resource language with no labeled training data. However, training data may be available in a related language. We investigate whether character-level knowledge transfer from a related language helps text classification. We present a cross-lingual document classification framework (caco) that exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. Experiments confirm that character-level knowledge transfer is more data-efficient than word-level transfer between related languages.
APA, Harvard, Vancouver, ISO, and other citation styles
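
The framework in entry 5 couples a character-based embedder with a word-based classifier so that written-form similarity carries knowledge between related languages. The PyTorch sketch below shows only that shape; the dimensions, shared character inventory, and mean-pooled document vector are simplifying assumptions rather than the CACO implementation.

```python
import torch

class CharWordEmbedder(torch.nn.Module):
    """Builds word vectors from character sequences shared across languages."""
    def __init__(self, n_chars=128, char_dim=16, word_dim=64):
        super().__init__()
        self.char_emb = torch.nn.Embedding(n_chars, char_dim)
        self.lstm = torch.nn.LSTM(char_dim, word_dim // 2,
                                  bidirectional=True, batch_first=True)

    def forward(self, char_ids):                 # (num_words, max_chars)
        _, (h, _) = self.lstm(self.char_emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)   # (num_words, word_dim)

embedder = CharWordEmbedder()
classifier = torch.nn.Linear(64, 5)              # 5 document classes

# Words from the source and target languages share one character inventory, so
# a target-language word with a similar written form gets a similar vector.
word_chars = torch.randint(0, 128, (3, 10))      # 3 words, 10 characters each
doc_vector = embedder(word_chars).mean(dim=0)    # crude pooled document vector
logits = classifier(doc_vector)
```
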
6

Guinzoni, Roberta. "Walgreens Boots Alliance goes multi-lingual through e-learning". Human Resource Management International Digest 23, no. 7 (October 12, 2015): 5–8. http://dx.doi.org/10.1108/hrmid-08-2015-0138.

Full text of the source
Abstract:
Purpose – Explains how Walgreens Boots Alliance teamed up with Rosetta Stone on a digital language-learning program that is bringing many and significant benefits to individual employees and the company as a whole. Design/methodology/approach – Reveals that the aims were to develop English fluency for the non-English native speakers in the organization that work across countries and divisions and build language skills in at least one other main language spoken widely within the business. Provides a series of tips for anyone wanting to gain the far-reaching benefits of language learning in their organization. Findings – Describes how the e-learning course includes live online sessions with native-speaking tutors, conversation practice and games. Explains that the company also plans to supplement the structured digital-based learning with practice sessions in a work setting. These are planned to be so informal that people do not feel like they are learning or being taught. They will be social, networking occasions, such as “lunch in French” or other scheduled get-togethers where people are able to practice the language that they are learning. Practical implications – Highlights the importance of being realistic about time, supporting employees facing a digital challenge, getting sponsor support, integrating the continuous-learning concept into the organization, and using ambassadors. Social implications – Advances the view that a mastery of foreign languages is essential for successful business collaboration across countries and cultures, particularly for managers and directors. In providing language training for these serious learners, the company is not only helping its employees to understand each other better and relate to each other’s background and culture but also growing the business. Originality/value – Concludes that a digital-learning approach to language learning can help businesses to meet their learning goals and deliver training that is interactive, convenient and fun for learners.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Pinto da Costa, Mariana. "Conducting Cross-Cultural, Multi-Lingual and Multi-Country Focus Groups: Guidance for Researchers". International Journal of Qualitative Methods 20 (January 2021): 160940692110499. http://dx.doi.org/10.1177/16094069211049929.

Full text of the source
Abstract:
Conducting cross-cultural, multi-lingual and multi-country focus groups presents unique logistical and analytical challenges. However, there is little guidance on the considerations required for such international focus groups. Based on the author's experience of conducting such research, this publication documents the different stages of planning, fieldwork, analysis and dissemination, and how to mitigate and overcome possible challenges. It is essential to set up an adequate research team with the required linguistic and cultural background. All researchers should have the necessary training in qualitative methods and follow a standardised approach in the facilitation of focus groups across the different countries and in the analysis of the data, ideally in their original languages.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Fuad, Ahlam, and Maha Al-Yahya. "AraConv: Developing an Arabic Task-Oriented Dialogue System Using Multi-Lingual Transformer Model mT5". Applied Sciences 12, no. 4 (February 11, 2022): 1881. http://dx.doi.org/10.3390/app12041881.

Full text of the source
Abstract:
Task-oriented dialogue systems (DS) are designed to help users perform daily activities using natural language. Task-oriented DS for the English language have demonstrated promising performance outcomes; however, developing such systems to support Arabic remains a challenge. This challenge is mainly due to the lack of Arabic dialogue datasets. This study introduces the first Arabic end-to-end generative model for task-oriented DS (AraConv), which uses the multi-lingual transformer model mT5 with different settings. We also present an Arabic dialogue dataset (Arabic-TOD) and use it to train and test the proposed AraConv model. The results obtained are reasonable compared to those reported in studies of English and Chinese using the same mono-lingual settings. To avoid problems associated with a small training dataset and to improve the AraConv model's results, we suggest joint-training, in which the model is jointly trained on Arabic dialogue data and data from one or two high-resource languages such as English and Chinese. The findings indicate the AraConv model performed better in the joint-training setting than in the mono-lingual setting. The results obtained from AraConv on the Arabic dialogue dataset provide a baseline for other researchers to build robust end-to-end Arabic task-oriented DS that can engage with complex scenarios.
APA, Harvard, Vancouver, ISO, and other citation styles
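
The joint-training setting in entry 8 pools the small Arabic dialogue set with data from one or two high-resource languages before fine-tuning a single multilingual model (mT5 in the paper). The sketch below illustrates only that data-mixing step; the tiny in-memory pairs and the 3x upsampling ratio are assumptions for illustration.

```python
import random

random.seed(0)

# Tiny in-memory stand-ins for (dialogue context, system response) pairs.
arabic = [("مرحبا، أريد حجز طاولة", "كم عدد الأشخاص؟")]
english = [("hi, I want to book a table", "for how many people?")]
chinese = [("你好，我想订一张桌子", "请问几位？")]

# Upsample the small Arabic set so it is not drowned out, pool everything,
# and shuffle before fine-tuning one multilingual seq2seq model on the mix.
mixed = arabic * 3 + english + chinese
random.shuffle(mixed)
for context, response in mixed:
    pass  # tokenize the pair and feed it to the seq2seq fine-tuning step
```
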
9

Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations". PLOS ONE 16, no. 9 (September 21, 2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.

Full text of the source
Abstract:
Named entity recognition (NER) is a fundamental task in the natural language processing (NLP) community. Supervised neural network models based on contextualized word representations can achieve highly competitive performance, but this requires a large-scale manually annotated corpus for training. For resource-scarce languages, however, the construction of such a corpus is expensive and time-consuming. Thus, unsupervised cross-lingual transfer is a good solution to address the problem. In this work, we investigate unsupervised cross-lingual NER with model transfer based on contextualized word representations, which greatly advances cross-lingual NER performance. We study several model transfer settings of unsupervised cross-lingual NER, including (1) different types of pretrained transformer-based language models as input, (2) exploration strategies for the multilingual contextualized word representations, and (3) multi-source adaptation. In particular, we propose an adapter-based word representation method combined with a parameter generation network (PGN) to better capture the relationship between the source and target languages. We conduct experiments on the benchmark CoNLL dataset involving four languages to simulate the cross-lingual setting. Results show that we can obtain highly competitive performance by cross-lingual model transfer. In particular, our proposed adapter-based PGN model can lead to significant improvements for cross-lingual NER.
APA, Harvard, Vancouver, ISO, and other citation styles
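
Entry 9 combines adapters with a parameter generation network so that a language embedding conditions the adapter applied on top of contextualized representations. The simplified PyTorch sketch below shows that idea only; the hidden sizes, the number of languages, and the random inputs are illustrative assumptions, not the authors' implementation.

```python
import torch

class PGNAdapter(torch.nn.Module):
    """Residual bottleneck adapter whose weights are generated from a language embedding."""
    def __init__(self, hidden=768, bottleneck=64, lang_dim=8, n_langs=4):
        super().__init__()
        self.lang_emb = torch.nn.Embedding(n_langs, lang_dim)
        # Hypernetworks mapping the language embedding to the adapter weights.
        self.gen_down = torch.nn.Linear(lang_dim, hidden * bottleneck)
        self.gen_up = torch.nn.Linear(lang_dim, bottleneck * hidden)
        self.hidden, self.bottleneck = hidden, bottleneck

    def forward(self, states, lang_id):
        z = self.lang_emb(lang_id)                              # (lang_dim,)
        w_down = self.gen_down(z).view(self.hidden, self.bottleneck)
        w_up = self.gen_up(z).view(self.bottleneck, self.hidden)
        return states + torch.relu(states @ w_down) @ w_up      # residual adapter

adapter = PGNAdapter()
contextual_states = torch.randn(2, 16, 768)        # e.g. frozen mBERT outputs
lang_id = torch.tensor(0)                          # index of the input language
adapted = adapter(contextual_states, lang_id)      # fed to the NER tagging layer
print(adapted.shape)                               # torch.Size([2, 16, 768])
```
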
10

Xiang, Lu, Junnan Zhu, Yang Zhao, Yu Zhou, and Chengqing Zong. "Robust Cross-lingual Task-oriented Dialogue". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457571.

Full text of the source
Abstract:
Cross-lingual dialogue systems are increasingly important in e-commerce and customer service due to the rapid progress of globalization. In real-world system deployment, machine translation (MT) services are often used before and after the dialogue system to bridge different languages. However, noises and errors introduced in the MT process will result in the dialogue system's low robustness, making the system's performance far from satisfactory. In this article, we propose a novel MT-oriented noise enhanced framework that exploits multi-granularity MT noises and injects such noises into the dialogue system to improve the dialogue system's robustness. Specifically, we first design a method to automatically construct multi-granularity MT-oriented noises and multi-granularity adversarial examples, which contain abundant noise knowledge oriented to MT. Then, we propose two strategies to incorporate the noise knowledge: (i) Utterance-level adversarial learning and (ii) Knowledge-level guided method. The former adopts adversarial learning to learn a perturbation-invariant encoder, guiding the dialogue system to learn noise-independent hidden representations. The latter explicitly incorporates the multi-granularity noises, which contain the noise tokens and their possible correct forms, into the training and inference process, thus improving the dialogue system's robustness. Experimental results on three dialogue models, two dialogue datasets, and two language pairs have shown that the proposed framework significantly improves the performance of the cross-lingual dialogue system.
APA, Harvard, Vancouver, ISO, and other citation styles
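
Entry 10 builds multi-granularity, MT-style noisy variants of utterances and pairs them with the clean forms during training. The toy sketch below shows one way such perturbed examples could be constructed; the noise operations, rates, and placeholder token are made up for illustration and are not the paper's procedure.

```python
import random

random.seed(0)

def char_noise(word, p=0.1):
    """Character-level noise: randomly drop characters (simulating ASR/MT typos)."""
    return "".join(c for c in word if random.random() > p) or word

def word_noise(tokens, p=0.1, filler="<unk>"):
    """Word-level noise: randomly replace tokens with a placeholder."""
    return [filler if random.random() < p else t for t in tokens]

def make_adversarial(utterance):
    tokens = utterance.split()
    return " ".join(word_noise([char_noise(t) for t in tokens]))

clean = "i would like to book a cheap restaurant in the north"
noisy = make_adversarial(clean)   # paired with `clean` during training
print(noisy)
```
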

Doctoral dissertations on the topic "Multi-lingual training"

1

Dehouck, Mathieu. "Multi-lingual dependency parsing : word representation and joint training for syntactic analysis". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I019/document.

Full text of the source
Abstract:
While modern dependency parsers have become as good as human experts, they still rely heavily on hand-annotated training examples, which are available for only a handful of languages. Several methods, such as model and annotation transfer, have been proposed to make high-quality syntactic analysis available to low-resourced languages as well. In this thesis, we propose new approaches for sharing information across languages that rely on their shared morphological features. First, we propose to use shared morphological features to induce cross-lingual delexicalised word representations that help in learning syntactic analysis models. Then, we propose a new multi-task learning framework called phylogenetic learning, which learns models for related tasks/languages guided by the tasks'/languages' evolutionary tree. Finally, with our new measure of morphosyntactic complexity, we investigate the intrinsic role of morphological information for dependency parsing.
APA, Harvard, Vancouver, ISO, and other citation styles
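
The phylogenetic learning idea in the abstract above trains models down a language family tree so that related languages inherit parameters from a common ancestor model. The toy sketch below shows only that tree-guided training order; the tree, the list-based "model", and the training step are placeholder assumptions, not the thesis implementation.

```python
# Each node maps a child name to its subtree; empty dicts are language leaves.
tree = {
    "romance": {"french": {}, "spanish": {}},
    "germanic": {"english": {}, "german": {}},
}

def leaf_languages(node, name):
    """Collect the languages (leaves) under a node of the family tree."""
    if not node:
        return [name]
    return [lang for child_name, child in node.items()
            for lang in leaf_languages(child, child_name)]

def train(parent_model, langs):
    """Placeholder training pass: record which treebanks the model has seen."""
    return parent_model + [tuple(sorted(langs))]

def phylo_train(node, name, parent_model):
    model = train(parent_model, leaf_languages(node, name))
    if not node:                     # leaf: final, language-specific model
        print(f"{name}: {model}")
    for child_name, child in node.items():
        phylo_train(child, child_name, model)

phylo_train(tree, "indo-european", [])
```
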
2

Anoop, C. S. "Automatic speech recognition for low-resource Indian languages". Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6195.

Full text of the source
Abstract:
Building good models for automatic speech recognition (ASR) requires large amounts of annotated speech data. Recent advancements in end-to-end speech recognition have aggravated the need for data. However, most Indian languages are low-resourced and lack enough training data to build robust and efficient ASR systems. Despite the challenges associated with the scarcity of data, Indian languages offer some unique characteristics that can be utilized to improve speech recognition in low-resource settings. Most languages have an overlapping phoneme set and a strong correspondence between their character sets and pronunciations. Though the writing systems are different, the Unicode tables are organized so that similar-sounding characters occur at the same offset in the range assigned for each language. In the first part of the thesis, we try to exploit the pronunciation similarities among multiple Indian languages by using a shared set of pronunciation-based tokens. We evaluate the ASR performance for four choices of tokens, namely Epitran, Indian language speech sound label set (ILSL12), Sanskrit phonetic library encoding (SLP1), and SLP1-M (SLP1 modified to include some contextual pronunciation rules). Using Sanskrit as a representative Indian language, we conduct monolingual experiments to evaluate their ASR performance. Conventional Gaussian mixture model (GMM) - hidden Markov model (HMM) approaches, and neural network models leveraging on the alignments from the conventional models benefit from the stringent pronunciation modeling in SLP1-M. However, end-to-end (E2E) trained time-delay neural networks (TDNN) yield the best results with SLP1. Most Indian languages are spoken in units of syllables. However, syllables have never been used for E2E speech recognition in the Indian language, to the best of our knowledge. So we compare token units like native script characters, SLP1, and syllables in the monolingual settings for multiple Indian languages. We also evaluate the performance of sub-word units generated with the byte pair encoding (BPE) and unigram language model (ULM) algorithms on these basic units. We find that syllable-based sub-word units are promising alternatives to graphemes in monolingual speech recognition if the dataset fairly covers the syllables in the language. The benefits of syllable sub-words in E2E speech recognition may be attributed to the reduced effective length of the token sequences. We also investigate if the models trained on different token units can complement each other in a pretraining-fine-tuning setup. However, the performance improvements in such a setup with syllable-BPE and SLP1 character tokens are minor compared to the syllable-BPE trained model. We also investigate the suitability of syllable-based units in a cross-lingual training setup for a low-resource target language. However, the model faces convergence issues. SLP1 characters are a better choice in crosslingual transfer learning than the syllable sub-words. In the first part, we also verify the effectiveness of SpecAugment in an extremely low-resource setting. We apply SpecAugment on the log-mel spectrogram for data augmentation in a limited dataset of just 5.5 hours. The assumption is that the target language has no closely related high-resource source language, and only very limited data is available. SpecAugment provides an absolute improvement of 13.86% in WER on a connectionist temporal classification (CTC) based E2E system with weighted finite-state transducer (WFST) decoding. 
Based on this result, we extensively use SpecAugment in our experiments with E2E models.

In the second part of the thesis, we address strategies for improving the performance of ASR systems in low-resource scenarios (target languages) by exploiting annotated data from high-resource languages (source languages). Based on the results in the first part of the thesis, we extensively use SLP1 tokens in multilingual experiments on E2E networks. We specifically explore the following settings:

(a) Labeled audio data is not available in the target language; only a limited amount of unlabeled data is available. We propose using unsupervised domain adaptation (UDA) approaches in a hybrid DNN (deep neural network)-HMM setting to build ASR systems for low-resource languages sharing a common acoustic space with high-resource languages. We explore two architectures: (i) domain-adversarial training using a gradient reversal layer (GRL) and (ii) a domain separation network (DSN). The GRL and DSN architectures give absolute improvements of 6.71% and 7.32%, respectively, in word error rate (WER) over the baseline DNN with Hindi in the source domain and Sanskrit in the target domain. We also find that a judicious selection of the source language yields further improvements.

(b) The target language has only a small amount of labeled data and some text data to build language models. We try to benefit from the available data in high-resource languages through a common shared label set to build unified acoustic models (AM) and language models (LM). We study and compare the performance of these unified models with that of the monolingual model in low-resource conditions. The unified language-agnostic AM + LM performs better than the monolingual AM + LM in cases where (i) only limited speech data is available for training the acoustic models and (ii) the speech data is from domains different from those used in training. Multilingual AM + monolingual LM performs the best in general. However, the results suggest that applying unified models directly (without fine-tuning) to unseen languages is not a good choice.

(c) There are N target languages with limited training data and several source languages with large training sets. We explore the usefulness of model-agnostic meta-learning (MAML) pre-training for Indian languages and study the importance of the selection of the source languages. We find that MAML beats joint multilingual pretraining by an average of 5.4% in CER and 20.3% in WER with just five epochs of fine-tuning. Moreover, MAML achieves performance similar to joint multilingual training with just 25% of the training data. Similarity with the source languages impacts the target language's ASR performance. We propose a text-similarity-based loss-weighting scheme to exploit this effect and find absolute improvements of 1% (on average) in WER with the loss-weighting scheme.

The main contributions of the thesis are:
1. Finding that the use of SLP1 tokens as a common label set for Indian languages helps to remove the redundancy involved in pooling the characters from multiple languages.
2. Exploring for the first time (to the best of our knowledge) syllable-based token units for E2E speech recognition in Indian languages; we find that they are suitable only for monolingual ASR systems.
3. Formulating ASR in a low-resource language lacking labeled data (for the first time) as an unsupervised domain adaptation problem from a related high-resource language.
4. Exploring for the first time both unified acoustic and language models in multilingual ASR for Indian languages; the scheme has shown success in cases where the data for acoustic modeling is limited and in settings where the test data is out-of-domain.
5. Proposing a textual-similarity-based loss-weighting scheme for MAML pretraining, which improves the performance of vanilla MAML models.
APA, Harvard, Vancouver, ISO, and other citation styles
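
The thesis above reports large gains from SpecAugment in a 5.5-hour setting. As a reference point, here is a minimal sketch of masking-only SpecAugment on a log-mel spectrogram using torchaudio's transforms (time warping is omitted); the mask widths and the random tensor are illustrative, not the thesis configuration.

```python
import torch
import torchaudio

freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=15)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=35)

log_mel = torch.randn(1, 80, 300)           # (channel, mel bins, frames) stand-in
augmented = time_mask(freq_mask(log_mel))   # one frequency mask + one time mask
print(augmented.shape)                      # torch.Size([1, 80, 300])
```
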

Books on the topic "Multi-lingual training"

1

Chatrik, Balbir. The obstacle course: Experiences of multi-lingual trainees on youth training and employment training. London: Youthaid, 1992.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Multi-lingual training"

1

Landon-Smith, Kristine, and Chris Hay. "Empowering the somatically othered actor through multi-lingual improvisation in training". In Stages of Reckoning, 149–63. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003032076-12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Nouza, Jan, and Radek Safarik. "Parliament Archives Used for Automatic Training of Multi-lingual Automatic Speech Recognition Systems". In Text, Speech, and Dialogue, 174–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_20.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Hagans, Kristi S., and Catherine Richards-Tutor. "Interdisciplinary Training in Intensive Intervention for Students With Disabilities and Multi-Lingual Youth". In Handbook of Research on Interdisciplinary Preparation for Equitable Special Education, 296–317. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6438-0.ch015.

Full text of the source
Abstract:
This chapter describes an interdisciplinary training project funded by OSEP to support school psychology and dual credential teacher candidates to effectively work with and provide inclusive educational supports to students with disabilities, including multilingual youth, with intensive academic needs. The authors describe the need for the project and provide an overview of the conceptual framework and training components, such as didactic and experiential learning, as well as key elements of the project, including evidence-based assessment and instruction and culturally responsive and sustaining practices. Formative and summative measures used to measure candidate outcomes are described, and preliminary results are provided.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Tsurutani, Chiharu. "Computer-Assisted Pronunciation Training and Assessment (CAPTA) Programs". In Computer-Assisted Foreign Language Teaching and Learning, 276–88. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2821-2.ch016.

Full text of the source
Abstract:
Pedagogical support for pronunciation tends to fall behind other areas of applied linguistics and CALL (Computer-Assisted Language Learning) due to technological difficulty in speech recognition and the lack of knowledge in phonetics of both language teachers and learners. This chapter discusses the gap between the need for pronunciation training and the capacity of CAPTA programs in terms of phonetic and phonological development of second-language (L2) learners. Pronunciation difficulties experienced by L2 learners will be explained cross-linguistically, and the most recent developments in the production of CAPTA programs will be discussed in relation to the type of pronunciation errors dealt with by these programs. Considering that native-like pronunciation is no longer required in the current multi-lingual society, the author proposes achievable and pedagogically sound goals for the development of CAPTA programs as well as for L2 learners.
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Multi-lingual training"

1

Li, Shicheng, Pengcheng Yang, Fuli Luo, and Jun Xie. "Multi-Granularity Contrasting for Cross-Lingual Pre-Training". In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-acl.149.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Qin, Libo, Minheng Ni, Yue Zhang, and Wanxiang Che. "CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/533.

Full text of the source
Abstract:
Multi-lingual contextualized embeddings, such as multilingual BERT (mBERT), have shown success in a variety of zero-shot cross-lingual tasks. However, these models are limited by having inconsistent contextualized representations of subwords across different languages. Existing work addresses this issue with bilingual projection and fine-tuning techniques. We propose a data augmentation framework to generate multi-lingual code-switching data to fine-tune mBERT, which encourages the model to align representations from the source and multiple target languages at once by mixing their context information. Compared with existing work, our method does not rely on bilingual sentences for training, and requires only one training process for multiple target languages. Experimental results on five tasks with 19 languages show that our method leads to significantly improved performance for all the tasks compared with mBERT.
APA, Harvard, Vancouver, ISO, and other citation styles
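
The augmentation described above replaces tokens in a source-language sentence with translations drawn from bilingual dictionaries so that a multilingual encoder sees mixed-language contexts. The toy function below illustrates that dictionary-based replacement in the spirit of CoSDA-ML; the tiny dictionaries and the switching ratio are made-up assumptions, not the paper's resources.

```python
import random

random.seed(0)

dictionaries = {
    "de": {"i": "ich", "like": "mag", "music": "Musik"},
    "es": {"i": "yo", "like": "gusta", "music": "música"},
}

def code_switch(sentence, ratio=0.5):
    """Randomly replace tokens with translations from a random target language."""
    out = []
    for tok in sentence.lower().split():
        if random.random() < ratio:
            lang = random.choice(list(dictionaries))
            out.append(dictionaries[lang].get(tok, tok))
        else:
            out.append(tok)
    return " ".join(out)

print(code_switch("I like music"))   # e.g. "ich like música"
```
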
3

Soky, Kak, Sheng Li, Tatsuya Kawahara, and Sopheap Seng. "Multi-lingual Transformer Training for Khmer Automatic Speech Recognition". In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2019. http://dx.doi.org/10.1109/apsipaasc47483.2019.9023137.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Saiko, Masahiro, Hitoshi Yamamoto, Ryosuke Isotani, and Chiori Hori. "Efficient multi-lingual unsupervised acoustic model training under mismatch conditions". In 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014. http://dx.doi.org/10.1109/slt.2014.7078544.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Gessler, Luke, and Amir Zeldes. "MicroBERT: Effective Training of Low-resource Monolingual BERTs through Parameter Reduction and Multitask Learning". In Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL). Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.mrl-1.9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Conceição, Jhonatas Santos de Jesus, Allan Pinto, Luis Decker, Jose Luis Flores Campana, Manuel Cordova Neira, Andrezza A. Dos Santos, Helio Pedrini, and Ricardo Torres. "Multi-Lingual Text Localization via Language-Specific Convolutional Neural Networks". In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8333.

Full text of the source
Abstract:
Scene text localization and recognition is a topic in computer vision that aims to delimit candidate regions in an input image containing incidental scene text elements. The challenge of this research consists in devising detectors capable of dealing with a wide range of variability, such as font size, font style, color, complex background, text in different languages, among others. This work presents a comparison between two strategies of building classification models, based on a Convolution Neural Network method, to detect textual elements in multiple languages in images: (i) classification model built on a multi-lingual training scenario; and (ii) classification model built on a language-specific training scenario. The experiments designed in this work indicate that language-specific model outperforms the classification model trained over a multi-lingual scenario, with an improvement of 14.79%, 8.94%, and 11.43%, in terms of precision, recall, and F-measure values, respectively.
APA, Harvard, Vancouver, ISO, and other citation styles
7

He, Xiaodong, Li Deng, Dilek Hakkani-Tur, and Gokhan Tur. "Multi-style adaptive training for robust cross-lingual spoken language understanding". In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639292.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Masumura, Ryo, Yusuke Shinohara, Ryuichiro Higashinaka, and Yushi Aono. "Adversarial Training for Multi-task and Multi-lingual Joint Modeling of Utterance Intent Classification". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1064.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Lai, Siyu, Hui Huang, Dong Jing, Yufeng Chen, Jinan Xu, and Jian Liu. "Saliency-based Multi-View Mixed Language Training for Zero-shot Cross-lingual Classification". In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.55.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Barry, James, Joachim Wagner, and Jennifer Foster. "Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study". In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-6118.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles