Journal articles on the topic "Pre-training corpora"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 journal articles for research on the topic "Pre-training corpora".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract online, if it is available in the metadata.
Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.
Sun, Yu, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8968–75. http://dx.doi.org/10.1609/aaai.v34i05.6428.
Moodaley, Wayne, and Arnesh Telukdarie. "A Conceptual Framework for Subdomain Specific Pre-Training of Large Language Models for Green Claim Detection". European Journal of Sustainable Development 12, no. 4 (October 1, 2023): 319. http://dx.doi.org/10.14207/ejsd.2023.v12n4p319.
Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. "Multilingual Denoising Pre-training for Neural Machine Translation". Transactions of the Association for Computational Linguistics 8 (November 2020): 726–42. http://dx.doi.org/10.1162/tacl_a_00343.
Dean, Roger Thornton, and Marcus Thomas Pearce. "Algorithmically-generated Corpora that use Serial Compositional Principles Can Contribute to the Modeling of Sequential Pitch Structure in Non-tonal Music". Empirical Musicology Review 11, no. 1 (July 8, 2016): 27. http://dx.doi.org/10.18061/emr.v11i1.4900.
Yuan, Sha, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models". AI Open 2 (2021): 65–68. http://dx.doi.org/10.1016/j.aiopen.2021.06.001.
Kreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, et al. "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets". Transactions of the Association for Computational Linguistics 10 (2022): 50–72. http://dx.doi.org/10.1162/tacl_a_00447.
Qian, Jing, Yong Yue, Katie Atkinson, and Gangmin Li. "Understanding Chinese Moral Stories with Further Pre-Training". International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 1–12. http://dx.doi.org/10.5121/ijnlc.2023.12201.
Jiang, Xiaoze, Yaobo Liang, Weizhu Chen, and Nan Duan. "XLM-K: Improving Cross-Lingual Language Model Pre-training with Multilingual Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10840–48. http://dx.doi.org/10.1609/aaai.v36i10.21330.
Kajiwara, Tomoyuki, Biwa Miura, and Yuki Arase. "Monolingual Transfer Learning via Bilingual Translators for Style-Sensitive Paraphrase Generation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8042–49. http://dx.doi.org/10.1609/aaai.v34i05.6314.
Kryeziu, Labehat, and Visar Shehu. "Pre-Training MLM Using Bert for the Albanian Language". SEEU Review 18, no. 1 (June 1, 2023): 52–62. http://dx.doi.org/10.2478/seeur-2023-0035.
Shi, Peng, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. "Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13806–14. http://dx.doi.org/10.1609/aaai.v35i15.17627.
Alruwaili, Awatif. "An online training course on the use of corpora for teachers in public schools". JALT CALL Journal 19, no. 1 (April 2023): 53–70. http://dx.doi.org/10.29140/jaltcall.v19n1.675.
Luo, Da, Yanglei Gan, Rui Hou, Run Lin, Qiao Liu, Yuxiang Cai, and Wannian Gao. "Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18742–50. http://dx.doi.org/10.1609/aaai.v38i17.29838.
Li, Zhen, Dan Qu, Chaojie Xie, Wenlin Zhang, and Yanxia Li. "Language Model Pre-training Method in Machine Translation Based on Named Entity Recognition". International Journal on Artificial Intelligence Tools 29, no. 07n08 (November 30, 2020): 2040021. http://dx.doi.org/10.1142/s0218213020400217.
Liu, Peng, Lemei Zhang, and Jon Atle Gulla. "Pre-train, Prompt, and Recommendation: A Comprehensive Survey of Language Modeling Paradigm Adaptations in Recommender Systems". Transactions of the Association for Computational Linguistics 11 (2023): 1553–71. http://dx.doi.org/10.1162/tacl_a_00619.
Maruyama, Takumi, and Kazuhide Yamamoto. "Extremely Low-Resource Text Simplification with Pre-trained Transformer Language Model". International Journal of Asian Language Processing 30, no. 01 (March 2020): 2050001. http://dx.doi.org/10.1142/s2717554520500010.
Zheng, Yinhe, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. "A Pre-Training Based Personalized Dialogue Generation Model with Persona-Sparse Data". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9693–700. http://dx.doi.org/10.1609/aaai.v34i05.6518.
Mao, Zhuoyuan, Chenhui Chu, and Sadao Kurohashi. "Linguistically Driven Multi-Task Pre-Training for Low-Resource Neural Machine Translation". ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (July 31, 2022): 1–29. http://dx.doi.org/10.1145/3491065.
Ai, Xi, and Bin Fang. "Empirical Regularization for Synthetic Sentence Pairs in Unsupervised Neural Machine Translation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12471–79. http://dx.doi.org/10.1609/aaai.v35i14.17479.
Fromont, Robert, and Kevin Watson. "Factors influencing automatic segmental alignment of sociophonetic corpora". Corpora 11, no. 3 (November 2016): 401–31. http://dx.doi.org/10.3366/cor.2016.0101.
Zhu, Quan, Xiaoyin Wang, Xuan Liu, Wanru Du, and Xingxing Ding. "Multi-task learning for aspect level semantic classification combining complex aspect target semantic enhancement and adaptive local focus". Mathematical Biosciences and Engineering 20, no. 10 (2023): 18566–91. http://dx.doi.org/10.3934/mbe.2023824.
Siddhant, Aditya, Anuj Goyal, and Angeliki Metallinou. "Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4959–66. http://dx.doi.org/10.1609/aaai.v33i01.33014959.
Gao, Yunfan, Yun Xiong, Siqi Wang, and Haofen Wang. "GeoBERT: Pre-Training Geospatial Representation Learning on Point-of-Interest". Applied Sciences 12, no. 24 (December 16, 2022): 12942. http://dx.doi.org/10.3390/app122412942.
Chiang, Cheng-Han, and Hung-yi Lee. "On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10518–25. http://dx.doi.org/10.1609/aaai.v36i10.21295.
Li, Yucheng, Frank Guerin, and Chenghua Lin. "LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18600–18607. http://dx.doi.org/10.1609/aaai.v38i17.29822.
Karimzadeh, Morteza, and Alan MacEachren. "GeoAnnotator: A Collaborative Semi-Automatic Platform for Constructing Geo-Annotated Text Corpora". ISPRS International Journal of Geo-Information 8, no. 4 (March 27, 2019): 161. http://dx.doi.org/10.3390/ijgi8040161.
Bae, Jae Kwon. "A Study on Application of the Artificial Intelligence-Based Pre-trained Language Model". Academic Society of Global Business Administration 21, no. 2 (April 30, 2024): 64–83. http://dx.doi.org/10.38115/asgba.2024.21.2.64.
Fang, Liuqin, Qing Ma, and Jiahao Yan. "The effectiveness of corpus-based training on collocation use in L2 writing for Chinese senior secondary school students". Journal of China Computer-Assisted Language Learning 1, no. 1 (August 1, 2021): 80–109. http://dx.doi.org/10.1515/jccall-2021-2004.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
He, Wanwei, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, et al. "GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-supervised Learning and Explicit Policy Injection". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10749–57. http://dx.doi.org/10.1609/aaai.v36i10.21320.
Garrido-Muñoz, Ismael, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso Ureña-López. "A Survey on Bias in Deep NLP". Applied Sciences 11, no. 7 (April 2, 2021): 3184. http://dx.doi.org/10.3390/app11073184.
Perkowski, Ernest, Rui Pan, Tuan Dung Nguyen, Yuan-Sen Ting, Sandor Kruk, Tong Zhang, Charlie O’Neill, et al. "AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets". Research Notes of the AAS 8, no. 1 (January 8, 2024): 7. http://dx.doi.org/10.3847/2515-5172/ad1abe.
Wang, Ke, Xiutian Zhao, and Wei Peng. "Learning from Failure: Improving Meeting Summarization without Good Samples". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19153–61. http://dx.doi.org/10.1609/aaai.v38i17.29883.
Pota, Marco, Mirko Ventura, Rosario Catelli, and Massimo Esposito. "An Effective BERT-Based Pipeline for Twitter Sentiment Analysis: A Case Study in Italian". Sensors 21, no. 1 (December 28, 2020): 133. http://dx.doi.org/10.3390/s21010133.
González-Docasal, Ander, and Aitor Álvarez. "Enhancing Voice Cloning Quality through Data Selection and Alignment-Based Metrics". Applied Sciences 13, no. 14 (July 10, 2023): 8049. http://dx.doi.org/10.3390/app13148049.
Vu, Dang Thanh, Gwanghyun Yu, Chilwoo Lee, and Jinyoung Kim. "Text Data Augmentation for the Korean Language". Applied Sciences 12, no. 7 (March 28, 2022): 3425. http://dx.doi.org/10.3390/app12073425.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Brenes, Jose A., Javier Ferrández-Pastor, José M. Cámara-Zapata, and Gabriela Marín-Raventós. "Use of Hough Transform and Homography for the Creation of Image Corpora for Smart Agriculture". International Journal on Cybernetics & Informatics 12, no. 6 (October 7, 2023): 9–19. http://dx.doi.org/10.5121/ijci.2023.120602.
Yang, Tiancheng, Ilia Sucholutsky, Kuang-Yu Jen, and Matthias Schonlau. "exKidneyBERT: a language model for kidney transplant pathology reports and the crucial role of extended vocabularies". PeerJ Computer Science 10 (February 28, 2024): e1888. http://dx.doi.org/10.7717/peerj-cs.1888.
Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation". ACM Transactions on Information Systems 41, no. 4 (March 23, 2023): 1–26. http://dx.doi.org/10.1145/3580488.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning". Remote Sensing 11, no. 1 (January 4, 2019): 83. http://dx.doi.org/10.3390/rs11010083.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Transformer-Based Decoder Designs for Semantic Segmentation on Remotely Sensed Images". Remote Sensing 13, no. 24 (December 15, 2021): 5100. http://dx.doi.org/10.3390/rs13245100.
Liu, Rui, and Barzan Mozafari. "Transformer with Memory Replay". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7567–75. http://dx.doi.org/10.1609/aaai.v36i7.20722.
Liu, Weijie, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. "K-BERT: Enabling Language Representation with Knowledge Graph". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2901–8. http://dx.doi.org/10.1609/aaai.v34i03.5681.
Peng, Baolin, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. "Soloist: Building Task Bots at Scale with Transfer Learning and Machine Teaching". Transactions of the Association for Computational Linguistics 9 (2021): 807–24. http://dx.doi.org/10.1162/tacl_a_00399.
Palagin, O. V., V. Yu Velychko, K. S. Malakhov, and O. S. Shchurov. "Distributional semantic modeling: a revised technique to train term/word vector space models applying the ontology-related approach". Problems in Programming, no. 2-3 (September 2020): 341–51. http://dx.doi.org/10.15407/pp2020.02-03.341.
Choi, Yong-Seok, Yo-Han Park, Seung Yun, Sang-Hun Kim, and Kong-Joo Lee. "Factors Behind the Effectiveness of an Unsupervised Neural Machine Translation System between Korean and Japanese". Applied Sciences 11, no. 16 (August 21, 2021): 7662. http://dx.doi.org/10.3390/app11167662.
Zayed, Abdelrahman, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, and Sarath Chandar. "Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14593–601. http://dx.doi.org/10.1609/aaai.v37i12.26706.
Keung, Phillip, Julian Salazar, Yichao Lu, and Noah A. Smith. "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings". Transactions of the Association for Computational Linguistics 8 (December 2020): 828–41. http://dx.doi.org/10.1162/tacl_a_00348.
Laucis, Rolands, and Gints Jēkabsons. "Evaluation of Word Embedding Models in Latvian NLP Tasks Based on Publicly Available Corpora". Applied Computer Systems 26, no. 2 (December 1, 2021): 132–38. http://dx.doi.org/10.2478/acss-2021-0016.