Journal articles on the topic "Pretrained models"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Pretrained models".
Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Si, Chenglei, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. "Sub-Character Tokenization for Chinese Pretrained Language Models". Transactions of the Association for Computational Linguistics 11 (May 18, 2023): 469–87. http://dx.doi.org/10.1162/tacl_a_00560.
Ren, Guanyu. "Monkeypox Disease Detection with Pretrained Deep Learning Models". Information Technology and Control 52, no. 2 (July 15, 2023): 288–96. http://dx.doi.org/10.5755/j01.itc.52.2.32803.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue". Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Choi, Yong-Seok, Yo-Han Park, and Kong Joo Lee. "Building a Korean morphological analyzer using two Korean BERT models". PeerJ Computer Science 8 (May 2, 2022): e968. http://dx.doi.org/10.7717/peerj-cs.968.
Kim, Hyunil, Tae-Yeong Kwak, Hyeyoon Chang, Sun Woo Kim, and Injung Kim. "RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis". Bioengineering 10, no. 11 (November 2, 2023): 1279. http://dx.doi.org/10.3390/bioengineering10111279.
Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models". Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.
Almonacid-Olleros, Guillermo, Gabino Almonacid, David Gil, and Javier Medina-Quero. "Evaluation of Transfer Learning and Fine-Tuning to Nowcast Energy Generation of Photovoltaic Systems in Different Climates". Sustainability 14, no. 5 (March 7, 2022): 3092. http://dx.doi.org/10.3390/su14053092.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models". Applied Sciences 12, no. 9 (April 29, 2022): 4522. http://dx.doi.org/10.3390/app12094522.
Mutreja, G., and K. Bittner. "EVALUATING CONVNET AND TRANSFORMER BASED SELF-SUPERVISED ALGORITHMS FOR BUILDING ROOF FORM CLASSIFICATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 315–21. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-315-2023.
Malyala, Sohith Sai, Janardhan Reddy Guntaka, Sai Vignesh Chintala, Lohith Vattikuti, and SrinivasaRao Tummalapalli. "Exploring How AI Answering Models Understand and Respond in Context". International Journal for Research in Applied Science and Engineering Technology 11, no. 9 (September 30, 2023): 224–28. http://dx.doi.org/10.22214/ijraset.2023.55597.
Demircioğlu, Aydin. "Deep Features from Pretrained Networks Do Not Outperform Hand-Crafted Features in Radiomics". Diagnostics 13, no. 20 (October 20, 2023): 3266. http://dx.doi.org/10.3390/diagnostics13203266.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning". Information 14, no. 3 (March 16, 2023): 187. http://dx.doi.org/10.3390/info14030187.
Jackson, Richard G., Erik Jansson, Aron Lagerberg, Elliot Ford, Vladimir Poroshin, Timothy Scrivener, Mats Axelsson, Martin Johansson, Lesly Arun Franco, and Eliseo Papa. "Ablations over transformer models for biomedical relationship extraction". F1000Research 9 (July 16, 2020): 710. http://dx.doi.org/10.12688/f1000research.24552.1.
Sahel, S., M. Alsahafi, M. Alghamdi, and T. Alsubait. "Logo Detection Using Deep Learning with Pretrained CNN Models". Engineering, Technology & Applied Science Research 11, no. 1 (February 6, 2021): 6724–29. http://dx.doi.org/10.48084/etasr.3919.
Jiang, Shengyi, Sihui Fu, Nankai Lin, and Yingwen Fu. "Pretrained models and evaluation data for the Khmer language". Tsinghua Science and Technology 27, no. 4 (August 2022): 709–18. http://dx.doi.org/10.26599/tst.2021.9010060.
Zeng, Zhiyuan, and Deyi Xiong. "Unsupervised and few-shot parsing from pretrained language models". Artificial Intelligence 305 (April 2022): 103665. http://dx.doi.org/10.1016/j.artint.2022.103665.
Saravagi, Deepika, Shweta Agrawal, Manisha Saravagi, Jyotir Moy Chatterjee, and Mohit Agarwal. "Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models". Computational Intelligence and Neuroscience 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7459260.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Erratum: Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1407. http://dx.doi.org/10.1162/tacl_x_00455.
Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models". International Journal of Environmental Research and Public Health 19, no. 22 (November 21, 2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models". Applied Sciences 11, no. 5 (February 24, 2021): 1974. http://dx.doi.org/10.3390/app11051974.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation". Future Internet 12, no. 12 (November 27, 2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lobo, Fernando, Maily Selena González, Alicia Boto, and José Manuel Pérez de la Lastra. "Prediction of Antifungal Activity of Antimicrobial Peptides by Transfer Learning from Protein Pretrained Models". International Journal of Molecular Sciences 24, no. 12 (June 17, 2023): 10270. http://dx.doi.org/10.3390/ijms241210270.
Zhang, Tianyu, Jake Gu, Omid Ardakanian, and Joyce Kim. "Addressing data inadequacy challenges in personal comfort models by combining pretrained comfort models". Energy and Buildings 264 (June 2022): 112068. http://dx.doi.org/10.1016/j.enbuild.2022.112068.
Yang, Xi, Jiang Bian, William R. Hogan, and Yonghui Wu. "Clinical concept extraction using transformers". Journal of the American Medical Informatics Association 27, no. 12 (October 29, 2020): 1935–42. http://dx.doi.org/10.1093/jamia/ocaa189.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation". Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
AlOyaynaa, Sarah, and Yasser Kotb. "Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models". ITM Web of Conferences 56 (2023): 04009. http://dx.doi.org/10.1051/itmconf/20235604009.
Kalyan, Katikapalli Subramanyam, Ajit Rajasekharan, and Sivanesan Sangeetha. "AMMU: A survey of transformer-based biomedical pretrained language models". Journal of Biomedical Informatics 126 (February 2022): 103982. http://dx.doi.org/10.1016/j.jbi.2021.103982.
Silver, Tom, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Kaelbling, and Michael Katz. "Generalized Planning in PDDL Domains with Pretrained Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20256–64. http://dx.doi.org/10.1609/aaai.v38i18.30006.
Ahmad, Muhammad Shahrul Zaim, Nor Azlina Ab. Aziz, and Anith Khairunnisa Ghazali. "Development of Automated Attendance System Using Pretrained Deep Learning Models". International Journal on Robotics, Automation and Sciences 6, no. 1 (April 30, 2024): 6–12. http://dx.doi.org/10.33093/ijoras.2024.6.1.2.
Yulianto, Rudy, Faqihudin, Meika Syahbana Rusli, Adhitio Satyo Bayangkari Karno, Widi Hastomo, Aqwam Rosadi Kardian, Vany Terisia, and Tri Surawan. "Innovative UNET-Based Steel Defect Detection Using 5 Pretrained Models". Evergreen 10, no. 4 (December 2023): 2365–78. http://dx.doi.org/10.5109/7160923.
Yin, Yi, Weiming Zhang, Nenghai Yu, and Kejiang Chen. "Steganalysis of neural networks based on parameter statistical bias". Journal of University of Science and Technology of China 52, no. 1 (2022): 1. http://dx.doi.org/10.52396/justc-2021-0197.
AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts". Applied Sciences 13, no. 12 (June 18, 2023): 7255. http://dx.doi.org/10.3390/app13127255.
Albashish, Dheeb. "Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images". PeerJ Computer Science 8 (July 5, 2022): e1031. http://dx.doi.org/10.7717/peerj-cs.1031.
Pan, Yu, Ye Yuan, Yichun Yin, Jiaxin Shi, Zenglin Xu, Ming Zhang, Lifeng Shang, Xin Jiang, and Qun Liu. "Preparing Lessons for Progressive Training on Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Anupriya, Anupriya. "Fine-tuning Pretrained Transformers for Sentiment Analysis on Twitter Data". Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1344–52. http://dx.doi.org/10.17762/msea.v70i2.2326.
Zhang, Zhanhao. "The transferability of transfer learning model based on ImageNet for medical image classification tasks". Applied and Computational Engineering 18, no. 1 (October 23, 2023): 143–51. http://dx.doi.org/10.54254/2755-2721/18/20230980.
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Siahkoohi, Ali, Mathias Louboutin, and Felix J. Herrmann. "The importance of transfer learning in seismic modeling and imaging". GEOPHYSICS 84, no. 6 (November 1, 2019): A47–A52. http://dx.doi.org/10.1190/geo2019-0056.1.
Chen, Die, Hua Zhang, Zeqi Chen, Bo Xie, and Ye Wang. "Comparative Analysis on Alignment-Based and Pretrained Feature Representations for the Identification of DNA-Binding Proteins". Computational and Mathematical Methods in Medicine 2022 (June 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/5847242.