Journal articles on the topic 'Pretrained models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Pretrained models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.
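For illustration only, here is a minimal Python sketch of the idea behind such a citation generator. The field names and formatting rules are hypothetical and deliberately simplified; this is not the site's actual implementation, and real style guides involve many more rules (author-count thresholds, title casing, punctuation).

# Minimal sketch: render one metadata record in two citation styles.
# Hypothetical schema; real APA/MLA formatting is far more involved.

def format_apa(ref):
    """Rough APA-style string from a metadata record (simplified)."""
    authors = ", ".join(ref["authors"])
    return (f'{authors} ({ref["year"]}). {ref["title"]}. '
            f'{ref["journal"]}, {ref["volume"]}({ref["issue"]}), '
            f'{ref["pages"]}. {ref["doi"]}')

def format_mla(ref):
    """Rough MLA-style string from the same record (simplified)."""
    authors = ", and ".join(ref["authors"])
    return (f'{authors}. "{ref["title"]}." {ref["journal"]}, '
            f'vol. {ref["volume"]}, no. {ref["issue"]}, '
            f'{ref["year"]}, pp. {ref["pages"]}.')

# Example record, taken from an entry in the list below.
record = {
    "authors": ["Xu, Canwen", "Julian McAuley"],
    "year": 2023,
    "title": ("A Survey on Model Compression and Acceleration "
              "for Pretrained Language Models"),
    "journal": "Proceedings of the AAAI Conference on Artificial Intelligence",
    "volume": 37,
    "issue": 9,
    "pages": "10566-75",
    "doi": "http://dx.doi.org/10.1609/aaai.v37i9.26255",
}

print(format_apa(record))
print(format_mla(record))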
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models." Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need." Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images." Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation." Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models." Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Si, Chenglei, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. "Sub-Character Tokenization for Chinese Pretrained Language Models." Transactions of the Association for Computational Linguistics 11 (May 18, 2023): 469–87. http://dx.doi.org/10.1162/tacl_a_00560.
Ren, Guanyu. "Monkeypox Disease Detection with Pretrained Deep Learning Models." Information Technology and Control 52, no. 2 (July 15, 2023): 288–96. http://dx.doi.org/10.5755/j01.itc.52.2.32803.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue." Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Choi, Yong-Seok, Yo-Han Park, and Kong Joo Lee. "Building a Korean morphological analyzer using two Korean BERT models." PeerJ Computer Science 8 (May 2, 2022): e968. http://dx.doi.org/10.7717/peerj-cs.968.
Kim, Hyunil, Tae-Yeong Kwak, Hyeyoon Chang, Sun Woo Kim, and Injung Kim. "RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis." Bioengineering 10, no. 11 (November 2, 2023): 1279. http://dx.doi.org/10.3390/bioengineering10111279.
Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models." Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.
Almonacid-Olleros, Guillermo, Gabino Almonacid, David Gil, and Javier Medina-Quero. "Evaluation of Transfer Learning and Fine-Tuning to Nowcast Energy Generation of Photovoltaic Systems in Different Climates." Sustainability 14, no. 5 (March 7, 2022): 3092. http://dx.doi.org/10.3390/su14053092.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (April 29, 2022): 4522. http://dx.doi.org/10.3390/app12094522.
Mutreja, G., and K. Bittner. "Evaluating ConvNet and Transformer Based Self-Supervised Algorithms for Building Roof Form Classification." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 315–21. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-315-2023.
Malyala, Sohith Sai, Janardhan Reddy Guntaka, Sai Vignesh Chintala, Lohith Vattikuti, and SrinivasaRao Tummalapalli. "Exploring How AI Answering Models Understand and Respond in Context." International Journal for Research in Applied Science and Engineering Technology 11, no. 9 (September 30, 2023): 224–28. http://dx.doi.org/10.22214/ijraset.2023.55597.
Demircioğlu, Aydin. "Deep Features from Pretrained Networks Do Not Outperform Hand-Crafted Features in Radiomics." Diagnostics 13, no. 20 (October 20, 2023): 3266. http://dx.doi.org/10.3390/diagnostics13203266.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning." Information 14, no. 3 (March 16, 2023): 187. http://dx.doi.org/10.3390/info14030187.
Jackson, Richard G., Erik Jansson, Aron Lagerberg, Elliot Ford, Vladimir Poroshin, Timothy Scrivener, Mats Axelsson, Martin Johansson, Lesly Arun Franco, and Eliseo Papa. "Ablations over transformer models for biomedical relationship extraction." F1000Research 9 (July 16, 2020): 710. http://dx.doi.org/10.12688/f1000research.24552.1.
Sahel, S., M. Alsahafi, M. Alghamdi, and T. Alsubait. "Logo Detection Using Deep Learning with Pretrained CNN Models." Engineering, Technology & Applied Science Research 11, no. 1 (February 6, 2021): 6724–29. http://dx.doi.org/10.48084/etasr.3919.
Jiang, Shengyi, Sihui Fu, Nankai Lin, and Yingwen Fu. "Pretrained models and evaluation data for the Khmer language." Tsinghua Science and Technology 27, no. 4 (August 2022): 709–18. http://dx.doi.org/10.26599/tst.2021.9010060.
Zeng, Zhiyuan, and Deyi Xiong. "Unsupervised and few-shot parsing from pretrained language models." Artificial Intelligence 305 (April 2022): 103665. http://dx.doi.org/10.1016/j.artint.2022.103665.
Saravagi, Deepika, Shweta Agrawal, Manisha Saravagi, Jyotir Moy Chatterjee, and Mohit Agarwal. "Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models." Computational Intelligence and Neuroscience 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7459260.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Erratum: Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1407. http://dx.doi.org/10.1162/tacl_x_00455.
Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models." International Journal of Environmental Research and Public Health 19, no. 22 (November 21, 2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (February 24, 2021): 1974. http://dx.doi.org/10.3390/app11051974.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation." Future Internet 12, no. 12 (November 27, 2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lobo, Fernando, Maily Selena González, Alicia Boto, and José Manuel Pérez de la Lastra. "Prediction of Antifungal Activity of Antimicrobial Peptides by Transfer Learning from Protein Pretrained Models." International Journal of Molecular Sciences 24, no. 12 (June 17, 2023): 10270. http://dx.doi.org/10.3390/ijms241210270.
Zhang, Tianyu, Jake Gu, Omid Ardakanian, and Joyce Kim. "Addressing data inadequacy challenges in personal comfort models by combining pretrained comfort models." Energy and Buildings 264 (June 2022): 112068. http://dx.doi.org/10.1016/j.enbuild.2022.112068.
Yang, Xi, Jiang Bian, William R. Hogan, and Yonghui Wu. "Clinical concept extraction using transformers." Journal of the American Medical Informatics Association 27, no. 12 (October 29, 2020): 1935–42. http://dx.doi.org/10.1093/jamia/ocaa189.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
AlOyaynaa, Sarah, and Yasser Kotb. "Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models." ITM Web of Conferences 56 (2023): 04009. http://dx.doi.org/10.1051/itmconf/20235604009.
Kalyan, Katikapalli Subramanyam, Ajit Rajasekharan, and Sivanesan Sangeetha. "AMMU: A survey of transformer-based biomedical pretrained language models." Journal of Biomedical Informatics 126 (February 2022): 103982. http://dx.doi.org/10.1016/j.jbi.2021.103982.
Silver, Tom, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Kaelbling, and Michael Katz. "Generalized Planning in PDDL Domains with Pretrained Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20256–64. http://dx.doi.org/10.1609/aaai.v38i18.30006.
Ahmad, Muhammad Shahrul Zaim, Nor Azlina Ab. Aziz, and Anith Khairunnisa Ghazali. "Development of Automated Attendance System Using Pretrained Deep Learning Models." International Journal on Robotics, Automation and Sciences 6, no. 1 (April 30, 2024): 6–12. http://dx.doi.org/10.33093/ijoras.2024.6.1.2.
Yulianto, Rudy, Faqihudin, Meika Syahbana Rusli, Adhitio Satyo Bayangkari Karno, Widi Hastomo, Aqwam Rosadi Kardian, Vany Terisia, and Tri Surawan. "Innovative UNET-Based Steel Defect Detection Using 5 Pretrained Models." Evergreen 10, no. 4 (December 2023): 2365–78. http://dx.doi.org/10.5109/7160923.
Yin, Yi, Weiming Zhang, Nenghai Yu, and Kejiang Chen. "Steganalysis of neural networks based on parameter statistical bias." Journal of University of Science and Technology of China 52, no. 1 (2022): 1. http://dx.doi.org/10.52396/justc-2021-0197.
AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts." Applied Sciences 13, no. 12 (June 18, 2023): 7255. http://dx.doi.org/10.3390/app13127255.
Albashish, Dheeb. "Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images." PeerJ Computer Science 8 (July 5, 2022): e1031. http://dx.doi.org/10.7717/peerj-cs.1031.
Pan, Yu, Ye Yuan, Yichun Yin, Jiaxin Shi, Zenglin Xu, Ming Zhang, Lifeng Shang, Xin Jiang, and Qun Liu. "Preparing Lessons for Progressive Training on Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Anupriya. "Fine-tuning Pretrained Transformers for Sentiment Analysis on Twitter Data." Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1344–52. http://dx.doi.org/10.17762/msea.v70i2.2326.
Zhang, Zhanhao. "The transferability of transfer learning model based on ImageNet for medical image classification tasks." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 143–51. http://dx.doi.org/10.54254/2755-2721/18/20230980.
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Siahkoohi, Ali, Mathias Louboutin, and Felix J. Herrmann. "The importance of transfer learning in seismic modeling and imaging." GEOPHYSICS 84, no. 6 (November 1, 2019): A47–A52. http://dx.doi.org/10.1190/geo2019-0056.1.
Chen, Die, Hua Zhang, Zeqi Chen, Bo Xie, and Ye Wang. "Comparative Analysis on Alignment-Based and Pretrained Feature Representations for the Identification of DNA-Binding Proteins." Computational and Mathematical Methods in Medicine 2022 (June 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/5847242.