Academic literature on the topic "Pre-training corpora"
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Pre-training corpora".
Journal articles on the topic "Pre-training corpora"
Sun, Yu, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8968–75. http://dx.doi.org/10.1609/aaai.v34i05.6428.
Moodaley, Wayne, and Arnesh Telukdarie. "A Conceptual Framework for Subdomain Specific Pre-Training of Large Language Models for Green Claim Detection". European Journal of Sustainable Development 12, no. 4 (October 1, 2023): 319. http://dx.doi.org/10.14207/ejsd.2023.v12n4p319.
Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. "Multilingual Denoising Pre-training for Neural Machine Translation". Transactions of the Association for Computational Linguistics 8 (November 2020): 726–42. http://dx.doi.org/10.1162/tacl_a_00343.
Dean, Roger Thornton, and Marcus Thomas Pearce. "Algorithmically-generated Corpora that use Serial Compositional Principles Can Contribute to the Modeling of Sequential Pitch Structure in Non-tonal Music". Empirical Musicology Review 11, no. 1 (July 8, 2016): 27. http://dx.doi.org/10.18061/emr.v11i1.4900.
Yuan, Sha, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models". AI Open 2 (2021): 65–68. http://dx.doi.org/10.1016/j.aiopen.2021.06.001.
Kreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, et al. "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets". Transactions of the Association for Computational Linguistics 10 (2022): 50–72. http://dx.doi.org/10.1162/tacl_a_00447.
Qian, Jing, Yong Yue, Katie Atkinson, and Gangmin Li. "Understanding Chinese Moral Stories with Further Pre-Training". International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 01–12. http://dx.doi.org/10.5121/ijnlc.2023.12201.
Jiang, Xiaoze, Yaobo Liang, Weizhu Chen, and Nan Duan. "XLM-K: Improving Cross-Lingual Language Model Pre-training with Multilingual Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10840–48. http://dx.doi.org/10.1609/aaai.v36i10.21330.
Kajiwara, Tomoyuki, Biwa Miura, and Yuki Arase. "Monolingual Transfer Learning via Bilingual Translators for Style-Sensitive Paraphrase Generation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8042–49. http://dx.doi.org/10.1609/aaai.v34i05.6314.
Kryeziu, Labehat, and Visar Shehu. "Pre-Training MLM Using Bert for the Albanian Language". SEEU Review 18, no. 1 (June 1, 2023): 52–62. http://dx.doi.org/10.2478/seeur-2023-0035.
Texto completoTesis sobre el tema "Pre-training corpora"
Ortiz Suarez, Pedro. "A Data-driven Approach to Natural Language Processing for Contemporary and Historical French". Electronic thesis or dissertation, Sorbonne Université, 2022. http://www.theses.fr/2022SORUS155.
In recent years, neural methods for Natural Language Processing (NLP) have consistently and repeatedly improved the state of the art in a wide variety of NLP tasks. One of the main contributing reasons for this steady improvement is the increased use of transfer learning techniques. These methods consist in taking a pre-trained model and reusing it, with little to no further training, to solve other tasks. Even though these models have clear advantages, their main drawback is the amount of data that is needed to pre-train them. The lack of availability of large-scale data previously hindered the development of such models for contemporary French, and even more so for its historical states. In this thesis, we focus on developing corpora for the pre-training of these transfer learning architectures. This approach proves to be extremely effective, as we are able to establish a new state of the art for a wide range of tasks in NLP for contemporary, medieval and early modern French, as well as for six other contemporary languages. Furthermore, we are able to determine not only that these models are extremely sensitive to pre-training data quality, heterogeneity and balance, but also that these three features are better predictors of the pre-trained models' performance on downstream tasks than the pre-training data size itself. In fact, we determine that the importance of the pre-training dataset size was largely overestimated, as we are able to repeatedly show that such models can be pre-trained with corpora of a modest size.
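The abstract's central point, that corpus quality, heterogeneity, and balance predict downstream performance better than raw size, comes down to how web-crawled text is curated before pre-training. As a purely illustrative sketch, not taken from the thesis, the Python snippet below shows the kind of heuristic quality filtering and exact deduplication such curation typically starts from; the helper names and the `min_words` and `max_digit_ratio` thresholds are assumptions made for the example.

```python
# Illustrative only: heuristic quality filtering and exact deduplication
# of raw web documents before they enter a pre-training corpus.
import hashlib


def looks_clean(doc: str, min_words: int = 50, max_digit_ratio: float = 0.3) -> bool:
    """Crude quality heuristics: enough running text, not dominated by digits."""
    words = doc.split()
    if len(words) < min_words:
        return False
    digit_count = sum(ch.isdigit() for ch in doc)
    return digit_count / max(len(doc), 1) < max_digit_ratio


def filter_corpus(docs):
    """Yield documents that pass the heuristics, skipping exact duplicates."""
    seen = set()
    for doc in docs:
        fingerprint = hashlib.sha1(doc.encode("utf-8")).hexdigest()
        if fingerprint in seen or not looks_clean(doc):
            continue
        seen.add(fingerprint)
        yield doc


if __name__ == "__main__":
    raw_docs = [
        "Too short to keep.",
        "A well-formed paragraph of running text. " * 20,
        "A well-formed paragraph of running text. " * 20,  # exact duplicate, dropped
    ]
    kept = list(filter_corpus(raw_docs))
    print(f"kept {len(kept)} of {len(raw_docs)} documents")
```

Real pipelines add language identification, near-duplicate detection, and source balancing on top of such heuristics; the sketch only makes concrete the kind of curation step the thesis argues matters more than sheer corpus volume.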
Books on the topic "Pre-training corpora"
Humphreys, S. C. Kinship in Ancient Athens. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198788249.001.0001.
Peters, Thomas A. Library Programs Online. ABC-CLIO, LLC, 2009. http://dx.doi.org/10.5040/9798400679216.
Texto completoCapítulos de libros sobre el tema "Pre-training corpora"
Mahamoud, Ibrahim Souleiman, Mickaël Coustaty, Aurélie Joseph, Vincent Poulain d’Andecy, and Jean-Marc Ogier. "KAP: Pre-training Transformers for Corporate Documents Understanding". In Document Analysis and Recognition – ICDAR 2023 Workshops, 65–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-41501-2_5.
Siva Raju, S., and Khushboo Ahire. "Enhancing the Quality of Pre-school Education Through Training of Anganwadi Workers: A CSR Initiative". In Corporate Social Responsibility in India, 81–95. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3902-7_5.
Stevens, Meg, Georgina Kennedy, and Timothy Churches. "Applying and Improving a Publicly Available Medication NER Pipeline in a Clinical Cancer EMR". In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti231051.
Jiang, Eric P. "Automatic Text Classification from Labeled and Unlabeled Data". In Intelligent Data Analysis for Real-Life Applications, 249–64. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1806-0.ch013.
Syed, Mahanazuddin, Shaymaa Al-Shukri, Shorabuddin Syed, Kevin Sexton, Melody L. Greer, Meredith Zozus, Sudeepa Bhattacharyya, and Fred Prior. "DeIDNER Corpus: Annotation of Clinical Discharge Summary Notes for Named Entity Recognition Using BRAT Tool". In Studies in Health Technology and Informatics. IOS Press, 2021. http://dx.doi.org/10.3233/shti210195.
Revenko, Artem, Victor Mireles, Anna Breit, Peter Bourgonje, Julian Moreno-Schneider, Maria Khvalchik, and Georg Rehm. "Learning Ontology Classes from Text by Clustering Lexical Substitutes Derived from Language Models". In Towards a Knowledge-Aware AI. IOS Press, 2022. http://dx.doi.org/10.3233/ssw220018.
Iyer, Usha. "Introduction". In Dancing Women, 1–26. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190938734.003.0001.
Arya, Ali. "Content Description for Face Animation". In Encyclopedia of Information Science and Technology, First Edition, 546–49. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-553-5.ch096.
Bier, Ada, and Elena Borsetto. "Bisogni e preoccupazioni del corpo docente impegnato in English Medium Instruction (EMI). Una prospettiva italiana post-pandemia". In La linguistica educativa tra ricerca e sperimentazione. Scritti in onore di Carmel Mary Coonan. Venice: Fondazione Università Ca’ Foscari, 2023. http://dx.doi.org/10.30687/978-88-6969-683-1/018.
Texto completoActas de conferencias sobre el tema "Pre-training corpora"
Vu, Thuy-Trang, Xuanli He, Gholamreza Haffari, and Ehsan Shareghi. "Koala: An Index for Quantifying Overlaps with Pre-training Corpora". In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-demo.7.
Liu, Zhuang, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. "FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/622.
Qian, Jing, Yong Yue, Katie Atkinson, and Gangmin Li. "Knowledge-Enriched Moral Understanding upon Continual Pre-training". In 10th International Conference on Computer Networks & Communications (CCNET 2023). Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130414.
Lu, Jinliang, Yu Lu, and Jiajun Zhang. "Take a Closer Look at Multilinguality! Improve Multilingual Pre-Training Using Monolingual Corpora Only". In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.190.
Wang, Xin'ao, Huan Li, Ke Chen, and Lidan Shou. "FedBFPT: An Efficient Federated Learning Framework for Bert Further Pre-training". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/483.
Qu, Yuanbin, Peihan Liu, Wei Song, Lizhen Liu, and Miaomiao Cheng. "A Text Generation and Prediction System: Pre-training on New Corpora Using BERT and GPT-2". In 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC). IEEE, 2020. http://dx.doi.org/10.1109/iceiec49280.2020.9152352.
Zan, Daoguang, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, and Jian-Guang Lou. "CERT: Continual Pre-training on Sketches for Library-oriented Code Generation". In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/329.
Edwards, Aleksandra, Jose Camacho-Collados, Hélène De Ribaupierre, and Alun Preece. "Go Simple and Pre-Train on Domain-Specific Corpora: On the Role of Training Data for Text Classification". In Proceedings of the 28th International Conference on Computational Linguistics. Stroudsburg, PA, USA: International Committee on Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.coling-main.481.
Florencio, Felipe de A., Matheus S. de Lacerda, Anderson P. Cavalcanti, and Vitor Rolim. "Three-Layer Denoiser: Denoising Parallel Corpora for NMT Systems". In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234268.