Journal articles on the topic "Pre-training corpora"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Pre-training corpora".
Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.
Sun, Yu, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8968–75. http://dx.doi.org/10.1609/aaai.v34i05.6428.
Moodaley, Wayne, and Arnesh Telukdarie. "A Conceptual Framework for Subdomain Specific Pre-Training of Large Language Models for Green Claim Detection." European Journal of Sustainable Development 12, no. 4 (October 1, 2023): 319. http://dx.doi.org/10.14207/ejsd.2023.v12n4p319.
Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. "Multilingual Denoising Pre-training for Neural Machine Translation." Transactions of the Association for Computational Linguistics 8 (November 2020): 726–42. http://dx.doi.org/10.1162/tacl_a_00343.
Dean, Roger Thornton, and Marcus Thomas Pearce. "Algorithmically-generated Corpora that use Serial Compositional Principles Can Contribute to the Modeling of Sequential Pitch Structure in Non-tonal Music." Empirical Musicology Review 11, no. 1 (July 8, 2016): 27. http://dx.doi.org/10.18061/emr.v11i1.4900.
Yuan, Sha, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models." AI Open 2 (2021): 65–68. http://dx.doi.org/10.1016/j.aiopen.2021.06.001.
Kreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, et al. "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets." Transactions of the Association for Computational Linguistics 10 (2022): 50–72. http://dx.doi.org/10.1162/tacl_a_00447.
Qian, Jing, Yong Yue, Katie Atkinson, and Gangmin Li. "Understanding Chinese Moral Stories with Further Pre-Training." International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 01–12. http://dx.doi.org/10.5121/ijnlc.2023.12201.
Jiang, Xiaoze, Yaobo Liang, Weizhu Chen, and Nan Duan. "XLM-K: Improving Cross-Lingual Language Model Pre-training with Multilingual Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10840–48. http://dx.doi.org/10.1609/aaai.v36i10.21330.
Kajiwara, Tomoyuki, Biwa Miura, and Yuki Arase. "Monolingual Transfer Learning via Bilingual Translators for Style-Sensitive Paraphrase Generation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8042–49. http://dx.doi.org/10.1609/aaai.v34i05.6314.
Kryeziu, Labehat, and Visar Shehu. "Pre-Training MLM Using Bert for the Albanian Language." SEEU Review 18, no. 1 (June 1, 2023): 52–62. http://dx.doi.org/10.2478/seeur-2023-0035.
Shi, Peng, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. "Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13806–14. http://dx.doi.org/10.1609/aaai.v35i15.17627.
Alruwaili, Awatif. "An online training course on the use of corpora for teachers in public schools." JALT CALL Journal 19, no. 1 (April 2023): 53–70. http://dx.doi.org/10.29140/jaltcall.v19n1.675.
Luo, Da, Yanglei Gan, Rui Hou, Run Lin, Qiao Liu, Yuxiang Cai, and Wannian Gao. "Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18742–50. http://dx.doi.org/10.1609/aaai.v38i17.29838.
Li, Zhen, Dan Qu, Chaojie Xie, Wenlin Zhang, and Yanxia Li. "Language Model Pre-training Method in Machine Translation Based on Named Entity Recognition." International Journal on Artificial Intelligence Tools 29, no. 07n08 (November 30, 2020): 2040021. http://dx.doi.org/10.1142/s0218213020400217.
Liu, Peng, Lemei Zhang, and Jon Atle Gulla. "Pre-train, Prompt, and Recommendation: A Comprehensive Survey of Language Modeling Paradigm Adaptations in Recommender Systems." Transactions of the Association for Computational Linguistics 11 (2023): 1553–71. http://dx.doi.org/10.1162/tacl_a_00619.
Maruyama, Takumi, and Kazuhide Yamamoto. "Extremely Low-Resource Text Simplification with Pre-trained Transformer Language Model." International Journal of Asian Language Processing 30, no. 01 (March 2020): 2050001. http://dx.doi.org/10.1142/s2717554520500010.
Zheng, Yinhe, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. "A Pre-Training Based Personalized Dialogue Generation Model with Persona-Sparse Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9693–700. http://dx.doi.org/10.1609/aaai.v34i05.6518.
Mao, Zhuoyuan, Chenhui Chu, and Sadao Kurohashi. "Linguistically Driven Multi-Task Pre-Training for Low-Resource Neural Machine Translation." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (July 31, 2022): 1–29. http://dx.doi.org/10.1145/3491065.
Ai, Xi, and Bin Fang. "Empirical Regularization for Synthetic Sentence Pairs in Unsupervised Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12471–79. http://dx.doi.org/10.1609/aaai.v35i14.17479.
Fromont, Robert, and Kevin Watson. "Factors influencing automatic segmental alignment of sociophonetic corpora." Corpora 11, no. 3 (November 2016): 401–31. http://dx.doi.org/10.3366/cor.2016.0101.
Zhu, Quan, Xiaoyin Wang, Xuan Liu, Wanru Du, and Xingxing Ding. "Multi-task learning for aspect level semantic classification combining complex aspect target semantic enhancement and adaptive local focus." Mathematical Biosciences and Engineering 20, no. 10 (2023): 18566–91. http://dx.doi.org/10.3934/mbe.2023824.
Siddhant, Aditya, Anuj Goyal, and Angeliki Metallinou. "Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4959–66. http://dx.doi.org/10.1609/aaai.v33i01.33014959.
Gao, Yunfan, Yun Xiong, Siqi Wang, and Haofen Wang. "GeoBERT: Pre-Training Geospatial Representation Learning on Point-of-Interest." Applied Sciences 12, no. 24 (December 16, 2022): 12942. http://dx.doi.org/10.3390/app122412942.
Chiang, Cheng-Han, and Hung-yi Lee. "On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10518–25. http://dx.doi.org/10.1609/aaai.v36i10.21295.
Li, Yucheng, Frank Guerin, and Chenghua Lin. "LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18600–18607. http://dx.doi.org/10.1609/aaai.v38i17.29822.
Karimzadeh, Morteza, and Alan MacEachren. "GeoAnnotator: A Collaborative Semi-Automatic Platform for Constructing Geo-Annotated Text Corpora." ISPRS International Journal of Geo-Information 8, no. 4 (March 27, 2019): 161. http://dx.doi.org/10.3390/ijgi8040161.
Bae, Jae Kwon. "A Study on Application of the Artificial Intelligence-Based Pre-trained Language Model." Academic Society of Global Business Administration 21, no. 2 (April 30, 2024): 64–83. http://dx.doi.org/10.38115/asgba.2024.21.2.64.
Fang, Liuqin, Qing Ma, and Jiahao Yan. "The effectiveness of corpus-based training on collocation use in L2 writing for Chinese senior secondary school students." Journal of China Computer-Assisted Language Learning 1, no. 1 (August 1, 2021): 80–109. http://dx.doi.org/10.1515/jccall-2021-2004.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
He, Wanwei, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, et al. "GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-supervised Learning and Explicit Policy Injection." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10749–57. http://dx.doi.org/10.1609/aaai.v36i10.21320.
Garrido-Muñoz, Ismael, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso Ureña-López. "A Survey on Bias in Deep NLP." Applied Sciences 11, no. 7 (April 2, 2021): 3184. http://dx.doi.org/10.3390/app11073184.
Perkowski, Ernest, Rui Pan, Tuan Dung Nguyen, Yuan-Sen Ting, Sandor Kruk, Tong Zhang, Charlie O’Neill, et al. "AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets." Research Notes of the AAS 8, no. 1 (January 8, 2024): 7. http://dx.doi.org/10.3847/2515-5172/ad1abe.
Wang, Ke, Xiutian Zhao, and Wei Peng. "Learning from Failure: Improving Meeting Summarization without Good Samples." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19153–61. http://dx.doi.org/10.1609/aaai.v38i17.29883.
Pota, Marco, Mirko Ventura, Rosario Catelli, and Massimo Esposito. "An Effective BERT-Based Pipeline for Twitter Sentiment Analysis: A Case Study in Italian." Sensors 21, no. 1 (December 28, 2020): 133. http://dx.doi.org/10.3390/s21010133.
González-Docasal, Ander, and Aitor Álvarez. "Enhancing Voice Cloning Quality through Data Selection and Alignment-Based Metrics." Applied Sciences 13, no. 14 (July 10, 2023): 8049. http://dx.doi.org/10.3390/app13148049.
Vu, Dang Thanh, Gwanghyun Yu, Chilwoo Lee, and Jinyoung Kim. "Text Data Augmentation for the Korean Language." Applied Sciences 12, no. 7 (March 28, 2022): 3425. http://dx.doi.org/10.3390/app12073425.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Brenes, Jose A., Javier Ferrández-Pastor, José M. Cámara-Zapata, and Gabriela Marín-Raventós. "Use of Hough Transform and Homography for the Creation of Image Corpora for Smart Agriculture." International Journal on Cybernetics & Informatics 12, no. 6 (October 7, 2023): 09–19. http://dx.doi.org/10.5121/ijci.2023.120602.
Yang, Tiancheng, Ilia Sucholutsky, Kuang-Yu Jen, and Matthias Schonlau. "exKidneyBERT: a language model for kidney transplant pathology reports and the crucial role of extended vocabularies." PeerJ Computer Science 10 (February 28, 2024): e1888. http://dx.doi.org/10.7717/peerj-cs.1888.
Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation." ACM Transactions on Information Systems 41, no. 4 (March 23, 2023): 1–26. http://dx.doi.org/10.1145/3580488.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning." Remote Sensing 11, no. 1 (January 4, 2019): 83. http://dx.doi.org/10.3390/rs11010083.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Transformer-Based Decoder Designs for Semantic Segmentation on Remotely Sensed Images." Remote Sensing 13, no. 24 (December 15, 2021): 5100. http://dx.doi.org/10.3390/rs13245100.
Liu, Rui, and Barzan Mozafari. "Transformer with Memory Replay." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7567–75. http://dx.doi.org/10.1609/aaai.v36i7.20722.
Liu, Weijie, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. "K-BERT: Enabling Language Representation with Knowledge Graph." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2901–8. http://dx.doi.org/10.1609/aaai.v34i03.5681.
Peng, Baolin, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. "Soloist: Building Task Bots at Scale with Transfer Learning and Machine Teaching." Transactions of the Association for Computational Linguistics 9 (2021): 807–24. http://dx.doi.org/10.1162/tacl_a_00399.
Palagin, O. V., V. Yu Velychko, K. S. Malakhov, and O. S. Shchurov. "Distributional semantic modeling: a revised technique to train term/word vector space models applying the ontology-related approach." PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 341–51. http://dx.doi.org/10.15407/pp2020.02-03.341.
Choi, Yong-Seok, Yo-Han Park, Seung Yun, Sang-Hun Kim, and Kong-Joo Lee. "Factors Behind the Effectiveness of an Unsupervised Neural Machine Translation System between Korean and Japanese." Applied Sciences 11, no. 16 (August 21, 2021): 7662. http://dx.doi.org/10.3390/app11167662.
Zayed, Abdelrahman, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, and Sarath Chandar. "Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14593–601. http://dx.doi.org/10.1609/aaai.v37i12.26706.
Keung, Phillip, Julian Salazar, Yichao Lu, and Noah A. Smith. "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings." Transactions of the Association for Computational Linguistics 8 (December 2020): 828–41. http://dx.doi.org/10.1162/tacl_a_00348.
Laucis, Rolands, and Gints Jēkabsons. "Evaluation of Word Embedding Models in Latvian NLP Tasks Based on Publicly Available Corpora." Applied Computer Systems 26, no. 2 (December 1, 2021): 132–38. http://dx.doi.org/10.2478/acss-2021-0016.