Journal articles on the topic "Pre-training corpora"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Pre-training corpora".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the work's metadata.
Browse journal articles from many disciplines and compile accurate bibliographies.
Sun, Yu, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8968–75. http://dx.doi.org/10.1609/aaai.v34i05.6428.
Moodaley, Wayne, and Arnesh Telukdarie. "A Conceptual Framework for Subdomain Specific Pre-Training of Large Language Models for Green Claim Detection". European Journal of Sustainable Development 12, no. 4 (October 1, 2023): 319. http://dx.doi.org/10.14207/ejsd.2023.v12n4p319.
Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. "Multilingual Denoising Pre-training for Neural Machine Translation". Transactions of the Association for Computational Linguistics 8 (November 2020): 726–42. http://dx.doi.org/10.1162/tacl_a_00343.
Dean, Roger Thornton, and Marcus Thomas Pearce. "Algorithmically-generated Corpora that use Serial Compositional Principles Can Contribute to the Modeling of Sequential Pitch Structure in Non-tonal Music". Empirical Musicology Review 11, no. 1 (July 8, 2016): 27. http://dx.doi.org/10.18061/emr.v11i1.4900.
Yuan, Sha, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models". AI Open 2 (2021): 65–68. http://dx.doi.org/10.1016/j.aiopen.2021.06.001.
Kreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, et al. "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets". Transactions of the Association for Computational Linguistics 10 (2022): 50–72. http://dx.doi.org/10.1162/tacl_a_00447.
Qian, Jing, Yong Yue, Katie Atkinson, and Gangmin Li. "Understanding Chinese Moral Stories with Further Pre-Training". International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 01–12. http://dx.doi.org/10.5121/ijnlc.2023.12201.
Jiang, Xiaoze, Yaobo Liang, Weizhu Chen, and Nan Duan. "XLM-K: Improving Cross-Lingual Language Model Pre-training with Multilingual Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10840–48. http://dx.doi.org/10.1609/aaai.v36i10.21330.
Kajiwara, Tomoyuki, Biwa Miura, and Yuki Arase. "Monolingual Transfer Learning via Bilingual Translators for Style-Sensitive Paraphrase Generation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8042–49. http://dx.doi.org/10.1609/aaai.v34i05.6314.
Kryeziu, Labehat, and Visar Shehu. "Pre-Training MLM Using Bert for the Albanian Language". SEEU Review 18, no. 1 (June 1, 2023): 52–62. http://dx.doi.org/10.2478/seeur-2023-0035.
Shi, Peng, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. "Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13806–14. http://dx.doi.org/10.1609/aaai.v35i15.17627.
Alruwaili, Awatif. "An online training course on the use of corpora for teachers in public schools". JALT CALL Journal 19, no. 1 (April 2023): 53–70. http://dx.doi.org/10.29140/jaltcall.v19n1.675.
Luo, Da, Yanglei Gan, Rui Hou, Run Lin, Qiao Liu, Yuxiang Cai, and Wannian Gao. "Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18742–50. http://dx.doi.org/10.1609/aaai.v38i17.29838.
Li, Zhen, Dan Qu, Chaojie Xie, Wenlin Zhang, and Yanxia Li. "Language Model Pre-training Method in Machine Translation Based on Named Entity Recognition". International Journal on Artificial Intelligence Tools 29, no. 07n08 (November 30, 2020): 2040021. http://dx.doi.org/10.1142/s0218213020400217.
Liu, Peng, Lemei Zhang, and Jon Atle Gulla. "Pre-train, Prompt, and Recommendation: A Comprehensive Survey of Language Modeling Paradigm Adaptations in Recommender Systems". Transactions of the Association for Computational Linguistics 11 (2023): 1553–71. http://dx.doi.org/10.1162/tacl_a_00619.
Maruyama, Takumi, and Kazuhide Yamamoto. "Extremely Low-Resource Text Simplification with Pre-trained Transformer Language Model". International Journal of Asian Language Processing 30, no. 01 (March 2020): 2050001. http://dx.doi.org/10.1142/s2717554520500010.
Zheng, Yinhe, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. "A Pre-Training Based Personalized Dialogue Generation Model with Persona-Sparse Data". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9693–700. http://dx.doi.org/10.1609/aaai.v34i05.6518.
Mao, Zhuoyuan, Chenhui Chu, and Sadao Kurohashi. "Linguistically Driven Multi-Task Pre-Training for Low-Resource Neural Machine Translation". ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (July 31, 2022): 1–29. http://dx.doi.org/10.1145/3491065.
Ai, Xi, and Bin Fang. "Empirical Regularization for Synthetic Sentence Pairs in Unsupervised Neural Machine Translation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12471–79. http://dx.doi.org/10.1609/aaai.v35i14.17479.
Fromont, Robert, and Kevin Watson. "Factors influencing automatic segmental alignment of sociophonetic corpora". Corpora 11, no. 3 (November 2016): 401–31. http://dx.doi.org/10.3366/cor.2016.0101.
Zhu, Quan, Xiaoyin Wang, Xuan Liu, Wanru Du, and Xingxing Ding. "Multi-task learning for aspect level semantic classification combining complex aspect target semantic enhancement and adaptive local focus". Mathematical Biosciences and Engineering 20, no. 10 (2023): 18566–91. http://dx.doi.org/10.3934/mbe.2023824.
Siddhant, Aditya, Anuj Goyal, and Angeliki Metallinou. "Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4959–66. http://dx.doi.org/10.1609/aaai.v33i01.33014959.
Gao, Yunfan, Yun Xiong, Siqi Wang, and Haofen Wang. "GeoBERT: Pre-Training Geospatial Representation Learning on Point-of-Interest". Applied Sciences 12, no. 24 (December 16, 2022): 12942. http://dx.doi.org/10.3390/app122412942.
Chiang, Cheng-Han, and Hung-yi Lee. "On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10518–25. http://dx.doi.org/10.1609/aaai.v36i10.21295.
Li, Yucheng, Frank Guerin, and Chenghua Lin. "LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18600–18607. http://dx.doi.org/10.1609/aaai.v38i17.29822.
Karimzadeh, Morteza, and Alan MacEachren. "GeoAnnotator: A Collaborative Semi-Automatic Platform for Constructing Geo-Annotated Text Corpora". ISPRS International Journal of Geo-Information 8, no. 4 (March 27, 2019): 161. http://dx.doi.org/10.3390/ijgi8040161.
Bae, Jae Kwon. "A Study on Application of the Artificial Intelligence-Based Pre-trained Language Model". Academic Society of Global Business Administration 21, no. 2 (April 30, 2024): 64–83. http://dx.doi.org/10.38115/asgba.2024.21.2.64.
Fang, Liuqin, Qing Ma, and Jiahao Yan. "The effectiveness of corpus-based training on collocation use in L2 writing for Chinese senior secondary school students". Journal of China Computer-Assisted Language Learning 1, no. 1 (August 1, 2021): 80–109. http://dx.doi.org/10.1515/jccall-2021-2004.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
He, Wanwei, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, et al. "GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-supervised Learning and Explicit Policy Injection". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10749–57. http://dx.doi.org/10.1609/aaai.v36i10.21320.
Garrido-Muñoz, Ismael, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso Ureña-López. "A Survey on Bias in Deep NLP". Applied Sciences 11, no. 7 (April 2, 2021): 3184. http://dx.doi.org/10.3390/app11073184.
Perkowski, Ernest, Rui Pan, Tuan Dung Nguyen, Yuan-Sen Ting, Sandor Kruk, Tong Zhang, Charlie O’Neill, et al. "AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets". Research Notes of the AAS 8, no. 1 (January 8, 2024): 7. http://dx.doi.org/10.3847/2515-5172/ad1abe.
Wang, Ke, Xiutian Zhao, and Wei Peng. "Learning from Failure: Improving Meeting Summarization without Good Samples". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19153–61. http://dx.doi.org/10.1609/aaai.v38i17.29883.
Pota, Marco, Mirko Ventura, Rosario Catelli, and Massimo Esposito. "An Effective BERT-Based Pipeline for Twitter Sentiment Analysis: A Case Study in Italian". Sensors 21, no. 1 (December 28, 2020): 133. http://dx.doi.org/10.3390/s21010133.
González-Docasal, Ander, and Aitor Álvarez. "Enhancing Voice Cloning Quality through Data Selection and Alignment-Based Metrics". Applied Sciences 13, no. 14 (July 10, 2023): 8049. http://dx.doi.org/10.3390/app13148049.
Vu, Dang Thanh, Gwanghyun Yu, Chilwoo Lee, and Jinyoung Kim. "Text Data Augmentation for the Korean Language". Applied Sciences 12, no. 7 (March 28, 2022): 3425. http://dx.doi.org/10.3390/app12073425.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Brenes, Jose A., Javier Ferrández-Pastor, José M. Cámara-Zapata, and Gabriela Marín-Raventós. "Use of Hough Transform and Homography for the Creation of Image Corpora for Smart Agriculture". International Journal on Cybernetics & Informatics 12, no. 6 (October 7, 2023): 09–19. http://dx.doi.org/10.5121/ijci.2023.120602.
Yang, Tiancheng, Ilia Sucholutsky, Kuang-Yu Jen, and Matthias Schonlau. "exKidneyBERT: a language model for kidney transplant pathology reports and the crucial role of extended vocabularies". PeerJ Computer Science 10 (February 28, 2024): e1888. http://dx.doi.org/10.7717/peerj-cs.1888.
Li, Lei, Yongfeng Zhang, and Li Chen. "Personalized Prompt Learning for Explainable Recommendation". ACM Transactions on Information Systems 41, no. 4 (March 23, 2023): 1–26. http://dx.doi.org/10.1145/3580488.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning". Remote Sensing 11, no. 1 (January 4, 2019): 83. http://dx.doi.org/10.3390/rs11010083.
Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern, and Peerapon Vateekul. "Transformer-Based Decoder Designs for Semantic Segmentation on Remotely Sensed Images". Remote Sensing 13, no. 24 (December 15, 2021): 5100. http://dx.doi.org/10.3390/rs13245100.
Liu, Rui, and Barzan Mozafari. "Transformer with Memory Replay". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7567–75. http://dx.doi.org/10.1609/aaai.v36i7.20722.
Liu, Weijie, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. "K-BERT: Enabling Language Representation with Knowledge Graph". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2901–8. http://dx.doi.org/10.1609/aaai.v34i03.5681.
Peng, Baolin, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. "Soloist: Building Task Bots at Scale with Transfer Learning and Machine Teaching". Transactions of the Association for Computational Linguistics 9 (2021): 807–24. http://dx.doi.org/10.1162/tacl_a_00399.
Palagin, O. V., V. Yu Velychko, K. S. Malakhov, and O. S. Shchurov. "Distributional semantic modeling: a revised technique to train term/word vector space models applying the ontology-related approach". PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 341–51. http://dx.doi.org/10.15407/pp2020.02-03.341.
Choi, Yong-Seok, Yo-Han Park, Seung Yun, Sang-Hun Kim, and Kong-Joo Lee. "Factors Behind the Effectiveness of an Unsupervised Neural Machine Translation System between Korean and Japanese". Applied Sciences 11, no. 16 (August 21, 2021): 7662. http://dx.doi.org/10.3390/app11167662.
Zayed, Abdelrahman, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, and Sarath Chandar. "Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14593–601. http://dx.doi.org/10.1609/aaai.v37i12.26706.
Keung, Phillip, Julian Salazar, Yichao Lu, and Noah A. Smith. "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings". Transactions of the Association for Computational Linguistics 8 (December 2020): 828–41. http://dx.doi.org/10.1162/tacl_a_00348.
Laucis, Rolands, and Gints Jēkabsons. "Evaluation of Word Embedding Models in Latvian NLP Tasks Based on Publicly Available Corpora". Applied Computer Systems 26, no. 2 (December 1, 2021): 132–38. http://dx.doi.org/10.2478/acss-2021-0016.