A ready-made bibliography on the topic "Multi-lingual training"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of recent journal articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multi-lingual training".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in whichever citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Multi-lingual training"
Chi, Zewen, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. "Cross-Lingual Natural Language Generation via Pre-Training". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7570–77. http://dx.doi.org/10.1609/aaai.v34i05.6256.
Cao, Yue, Xiaojun Wan, Jinge Yao, and Dian Yu. "MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 11–18. http://dx.doi.org/10.1609/aaai.v34i01.5328.
Kovacic, Michael, and Karl Cunningham. "Effective Electrical Safety Program Training in Multi-Lingual/Cultural Environments". IEEE Transactions on Industry Applications 55, no. 4 (July 2019): 4384–88. http://dx.doi.org/10.1109/tia.2019.2907883.
Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang, and Haobo Cheng. "Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition". Electronics 10, no. 24 (December 20, 2021): 3172. http://dx.doi.org/10.3390/electronics10243172.
Zhang, Mozhi, Yoshinari Fujinuma, and Jordan Boyd-Graber. "Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.
Guinzoni, Roberta. "Walgreens Boots Alliance goes multi-lingual through e-learning". Human Resource Management International Digest 23, no. 7 (October 12, 2015): 5–8. http://dx.doi.org/10.1108/hrmid-08-2015-0138.
Pinto da Costa, Mariana. "Conducting Cross-Cultural, Multi-Lingual and Multi-Country Focus Groups: Guidance for Researchers". International Journal of Qualitative Methods 20 (January 2021): 160940692110499. http://dx.doi.org/10.1177/16094069211049929.
Fuad, Ahlam, and Maha Al-Yahya. "AraConv: Developing an Arabic Task-Oriented Dialogue System Using Multi-Lingual Transformer Model mT5". Applied Sciences 12, no. 4 (February 11, 2022): 1881. http://dx.doi.org/10.3390/app12041881.
Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations". PLOS ONE 16, no. 9 (September 21, 2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.
Xiang, Lu, Junnan Zhu, Yang Zhao, Yu Zhou, and Chengqing Zong. "Robust Cross-lingual Task-oriented Dialogue". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457571.
Pełny tekst źródłaRozprawy doktorskie na temat "Multi-lingual training"
Dehouck, Mathieu. "Multi-lingual dependency parsing : word representation and joint training for syntactic analysis". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I019/document.
While modern dependency parsers have become as good as human experts, they still rely heavily on hand-annotated training examples, which are available for only a handful of languages. Several methods, such as model and annotation transfer, have been proposed to make high-quality syntactic analysis available to low-resource languages as well. In this thesis, we propose new approaches for sharing information across languages that rely on their shared morphological features. First, we use shared morphological features to induce cross-lingual delexicalised word representations that help in learning syntactic analysis models. Then, we propose a new multi-task learning framework called phylogenetic learning, which learns models for related tasks/languages guided by the evolutionary tree of those tasks/languages. Finally, with our new measure of morphosyntactic complexity, we investigate the intrinsic role of morphological information in dependency parsing.
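The first contribution in this abstract, delexicalised word representations built from shared morphological features, lends itself to a short illustration. The sketch below is not Dehouck's implementation; it is a minimal, hypothetical PyTorch encoder (the class name MorphWordEncoder and the attribute vocabulary are our own) that embeds a word as the sum of its Universal Dependencies attribute embeddings, so the same parameters can be trained jointly on any language annotated with the same attribute inventory.

```python
# Minimal sketch of a delexicalised, morphology-based word encoder.
# Assumes each token is given as a set of UD morphological attributes
# such as "UPOS=NOUN", "Case=Nom", "Number=Sing" (not the word form).
import torch
import torch.nn as nn

class MorphWordEncoder(nn.Module):
    """Embeds a word as the sum of its morphological attribute embeddings.

    Because UD attributes are shared across treebanks, one encoder can be
    trained jointly on several languages without any lexical overlap.
    """

    def __init__(self, attribute_vocab, dim=64):
        super().__init__()
        self.index = {a: i for i, a in enumerate(attribute_vocab)}
        self.embed = nn.EmbeddingBag(len(attribute_vocab), dim, mode="sum")

    def forward(self, words):
        # words: one list of attribute strings per token
        flat, offsets = [], []
        for attrs in words:
            offsets.append(len(flat))
            flat.extend(self.index[a] for a in attrs if a in self.index)
        return self.embed(torch.tensor(flat), torch.tensor(offsets))

# The same encoder handles tokens from different languages.
vocab = ["UPOS=NOUN", "UPOS=VERB", "Case=Nom", "Number=Sing", "Number=Plur"]
enc = MorphWordEncoder(vocab)
vecs = enc([["UPOS=NOUN", "Number=Sing"],   # fr: "chat"
            ["UPOS=NOUN", "Number=Plur"]])  # es: "gatos"
print(vecs.shape)  # torch.Size([2, 64])
```

Because the encoder never looks at the word form itself, a parser built on top of such representations is delexicalised by construction, which is what makes cross-lingual sharing possible.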
Anoop, C. S. "Automatic speech recognition for low-resource Indian languages". Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6195.
Books on the topic "Multi-lingual training"
Chatrik, Balbir. The obstacle course: Experiences of multi-lingual trainees on youth training and employment training. London: Youthaid, 1992.
Book chapters on the topic "Multi-lingual training"
Landon-Smith, Kristine, and Chris Hay. "Empowering the somatically othered actor through multi-lingual improvisation in training". In Stages of Reckoning, 149–63. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003032076-12.
Nouza, Jan, and Radek Safarik. "Parliament Archives Used for Automatic Training of Multi-lingual Automatic Speech Recognition Systems". In Text, Speech, and Dialogue, 174–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_20.
Hagans, Kristi S., and Catherine Richards-Tutor. "Interdisciplinary Training in Intensive Intervention for Students With Disabilities and Multi-Lingual Youth". In Handbook of Research on Interdisciplinary Preparation for Equitable Special Education, 296–317. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6438-0.ch015.
Tsurutani, Chiharu. "Computer-Assisted Pronunciation Training and Assessment (CAPTA) Programs". In Computer-Assisted Foreign Language Teaching and Learning, 276–88. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2821-2.ch016.
Conference papers on the topic "Multi-lingual training"
Li, Shicheng, Pengcheng Yang, Fuli Luo, and Jun Xie. "Multi-Granularity Contrasting for Cross-Lingual Pre-Training". In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-acl.149.
Qin, Libo, Minheng Ni, Yue Zhang, and Wanxiang Che. "CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/533.
Soky, Kak, Sheng Li, Tatsuya Kawahara, and Sopheap Seng. "Multi-lingual Transformer Training for Khmer Automatic Speech Recognition". In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2019. http://dx.doi.org/10.1109/apsipaasc47483.2019.9023137.
Saiko, Masahiro, Hitoshi Yamamoto, Ryosuke Isotani, and Chiori Hori. "Efficient multi-lingual unsupervised acoustic model training under mismatch conditions". In 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014. http://dx.doi.org/10.1109/slt.2014.7078544.
Gessler, Luke, and Amir Zeldes. "MicroBERT: Effective Training of Low-resource Monolingual BERTs through Parameter Reduction and Multitask Learning". In Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL). Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.mrl-1.9.
Conceição, Jhonatas Santos de Jesus, Allan Pinto, Luis Decker, Jose Luis Flores Campana, Manuel Cordova Neira, Andrezza A. Dos Santos, Helio Pedrini, and Ricardo Torres. "Multi-Lingual Text Localization via Language-Specific Convolutional Neural Networks". In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8333.
He, Xiaodong, Li Deng, Dilek Hakkani-Tur, and Gokhan Tur. "Multi-style adaptive training for robust cross-lingual spoken language understanding". In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639292.
Masumura, Ryo, Yusuke Shinohara, Ryuichiro Higashinaka, and Yushi Aono. "Adversarial Training for Multi-task and Multi-lingual Joint Modeling of Utterance Intent Classification". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1064.
Lai, Siyu, Hui Huang, Dong Jing, Yufeng Chen, Jinan Xu, and Jian Liu. "Saliency-based Multi-View Mixed Language Training for Zero-shot Cross-lingual Classification". In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.55.
Barry, James, Joachim Wagner, and Jennifer Foster. "Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study". In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-6118.