A selection of scholarly literature on the topic "Multi-lingual training"

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Multi-lingual training".

Journal articles on the topic "Multi-lingual training"
Chi, Zewen, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. "Cross-Lingual Natural Language Generation via Pre-Training." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7570–77. http://dx.doi.org/10.1609/aaai.v34i05.6256.
Cao, Yue, Xiaojun Wan, Jinge Yao, and Dian Yu. "MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 11–18. http://dx.doi.org/10.1609/aaai.v34i01.5328.
Kovacic, Michael, and Karl Cunningham. "Effective Electrical Safety Program Training in Multi-Lingual/Cultural Environments." IEEE Transactions on Industry Applications 55, no. 4 (July 2019): 4384–88. http://dx.doi.org/10.1109/tia.2019.2907883.
Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang, and Haobo Cheng. "Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition." Electronics 10, no. 24 (December 20, 2021): 3172. http://dx.doi.org/10.3390/electronics10243172.
Zhang, Mozhi, Yoshinari Fujinuma, and Jordan Boyd-Graber. "Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.
Guinzoni, Roberta. "Walgreens Boots Alliance goes multi-lingual through e-learning." Human Resource Management International Digest 23, no. 7 (October 12, 2015): 5–8. http://dx.doi.org/10.1108/hrmid-08-2015-0138.
Pinto da Costa, Mariana. "Conducting Cross-Cultural, Multi-Lingual and Multi-Country Focus Groups: Guidance for Researchers." International Journal of Qualitative Methods 20 (January 2021): 160940692110499. http://dx.doi.org/10.1177/16094069211049929.
Fuad, Ahlam, and Maha Al-Yahya. "AraConv: Developing an Arabic Task-Oriented Dialogue System Using Multi-Lingual Transformer Model mT5." Applied Sciences 12, no. 4 (February 11, 2022): 1881. http://dx.doi.org/10.3390/app12041881.
Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations." PLOS ONE 16, no. 9 (September 21, 2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.
Xiang, Lu, Junnan Zhu, Yang Zhao, Yu Zhou, and Chengqing Zong. "Robust Cross-lingual Task-oriented Dialogue." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457571.
Повний текст джерелаДисертації з теми "Multi-lingual training"
Dehouck, Mathieu. "Multi-lingual dependency parsing : word representation and joint training for syntactic analysis." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I019/document.
While modern dependency parsers have become as good as human experts, they still rely heavily on hand-annotated training examples, which are available for only a handful of languages. Several methods, such as model and annotation transfer, have been proposed to make high-quality syntactic analysis available to low-resource languages as well. In this thesis, we propose new approaches for sharing information across languages that rely on their shared morphological features. First, we propose to use shared morphological features to induce cross-lingual delexicalised word representations that help in learning syntactic analysis models. Then, we propose a new multi-task learning framework called phylogenetic learning, which learns models for related tasks/languages guided by the tasks'/languages' evolutionary tree. Finally, with our new measure of morphosyntactic complexity, we investigate the intrinsic role of morphological information in dependency parsing.
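The core idea behind delexicalised cross-lingual representations can be illustrated with a minimal sketch (hypothetical code, not taken from the thesis): words are replaced by language-independent bundles of morphological features, such as Universal Dependencies part-of-speech tags and feature-value pairs, so that tokens from different languages with the same morphology map to the same representation.

```python
def delexicalise(token):
    """Map a (word, upos, feats) triple to a language-neutral feature tuple.

    The word form itself is discarded; only the UPOS tag and a chosen
    subset of shared UD morphological features are kept.
    """
    word, upos, feats = token
    keep = ("Case", "Number", "Gender", "Tense")  # shared UD features
    bundle = tuple(sorted(f"{k}={v}" for k, v in feats.items() if k in keep))
    return (upos,) + bundle


# A French and a Spanish noun with identical morphology collapse to the
# same representation, which is what enables cross-lingual sharing.
fr = ("maisons", "NOUN", {"Gender": "Fem", "Number": "Plur"})
es = ("casas",   "NOUN", {"Gender": "Fem", "Number": "Plur"})
assert delexicalise(fr) == delexicalise(es) == ("NOUN", "Gender=Fem", "Number=Plur")
```

A parser trained on such tuples for one language can then, in principle, be applied to any language annotated with the same feature inventory.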
Anoop, C. S. "Automatic speech recognition for low-resource Indian languages." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6195.
Books on the topic "Multi-lingual training"
Chatrik, Balbir. The obstacle course: Experiences of multi-lingual trainees on youth training and employment training. London: Youthaid, 1992.
Book chapters on the topic "Multi-lingual training"
Landon-Smith, Kristine, and Chris Hay. "Empowering the somatically othered actor through multi-lingual improvisation in training." In Stages of Reckoning, 149–63. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003032076-12.
Nouza, Jan, and Radek Safarik. "Parliament Archives Used for Automatic Training of Multi-lingual Automatic Speech Recognition Systems." In Text, Speech, and Dialogue, 174–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_20.
Hagans, Kristi S., and Catherine Richards-Tutor. "Interdisciplinary Training in Intensive Intervention for Students With Disabilities and Multi-Lingual Youth." In Handbook of Research on Interdisciplinary Preparation for Equitable Special Education, 296–317. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6438-0.ch015.
Tsurutani, Chiharu. "Computer-Assisted Pronunciation Training and Assessment (CAPTA) Programs." In Computer-Assisted Foreign Language Teaching and Learning, 276–88. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2821-2.ch016.
Conference papers on the topic "Multi-lingual training"
Li, Shicheng, Pengcheng Yang, Fuli Luo, and Jun Xie. "Multi-Granularity Contrasting for Cross-Lingual Pre-Training." In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-acl.149.
Qin, Libo, Minheng Ni, Yue Zhang, and Wanxiang Che. "CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/533.
Soky, Kak, Sheng Li, Tatsuya Kawahara, and Sopheap Seng. "Multi-lingual Transformer Training for Khmer Automatic Speech Recognition." In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2019. http://dx.doi.org/10.1109/apsipaasc47483.2019.9023137.
Saiko, Masahiro, Hitoshi Yamamoto, Ryosuke Isotani, and Chiori Hori. "Efficient multi-lingual unsupervised acoustic model training under mismatch conditions." In 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014. http://dx.doi.org/10.1109/slt.2014.7078544.
Gessler, Luke, and Amir Zeldes. "MicroBERT: Effective Training of Low-resource Monolingual BERTs through Parameter Reduction and Multitask Learning." In Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL). Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.mrl-1.9.
Conceição, Jhonatas Santos de Jesus, Allan Pinto, Luis Decker, Jose Luis Flores Campana, Manuel Cordova Neira, Andrezza A. Dos Santos, Helio Pedrini, and Ricardo Torres. "Multi-Lingual Text Localization via Language-Specific Convolutional Neural Networks." In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8333.
He, Xiaodong, Li Deng, Dilek Hakkani-Tur, and Gokhan Tur. "Multi-style adaptive training for robust cross-lingual spoken language understanding." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639292.
Masumura, Ryo, Yusuke Shinohara, Ryuichiro Higashinaka, and Yushi Aono. "Adversarial Training for Multi-task and Multi-lingual Joint Modeling of Utterance Intent Classification." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1064.
Lai, Siyu, Hui Huang, Dong Jing, Yufeng Chen, Jinan Xu, and Jian Liu. "Saliency-based Multi-View Mixed Language Training for Zero-shot Cross-lingual Classification." In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.55.
Barry, James, Joachim Wagner, and Jennifer Foster. "Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study." In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-6118.