Journal articles on the topic "Pretrained models"
Below are the top 50 journal articles for research on the topic "Pretrained models".
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Si, Chenglei, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. "Sub-Character Tokenization for Chinese Pretrained Language Models". Transactions of the Association for Computational Linguistics 11 (May 18, 2023): 469–87. http://dx.doi.org/10.1162/tacl_a_00560.
Ren, Guanyu. "Monkeypox Disease Detection with Pretrained Deep Learning Models". Information Technology and Control 52, no. 2 (July 15, 2023): 288–96. http://dx.doi.org/10.5755/j01.itc.52.2.32803.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue". Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Choi, Yong-Seok, Yo-Han Park, and Kong Joo Lee. "Building a Korean morphological analyzer using two Korean BERT models". PeerJ Computer Science 8 (May 2, 2022): e968. http://dx.doi.org/10.7717/peerj-cs.968.
Kim, Hyunil, Tae-Yeong Kwak, Hyeyoon Chang, Sun Woo Kim, and Injung Kim. "RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis". Bioengineering 10, no. 11 (November 2, 2023): 1279. http://dx.doi.org/10.3390/bioengineering10111279.
Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models". Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.
Almonacid-Olleros, Guillermo, Gabino Almonacid, David Gil, and Javier Medina-Quero. "Evaluation of Transfer Learning and Fine-Tuning to Nowcast Energy Generation of Photovoltaic Systems in Different Climates". Sustainability 14, no. 5 (March 7, 2022): 3092. http://dx.doi.org/10.3390/su14053092.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models". Applied Sciences 12, no. 9 (April 29, 2022): 4522. http://dx.doi.org/10.3390/app12094522.
Mutreja, G., and K. Bittner. "EVALUATING CONVNET AND TRANSFORMER BASED SELF-SUPERVISED ALGORITHMS FOR BUILDING ROOF FORM CLASSIFICATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 315–21. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-315-2023.
Malyala, Sohith Sai, Janardhan Reddy Guntaka, Sai Vignesh Chintala, Lohith Vattikuti, and SrinivasaRao Tummalapalli. "Exploring How AI Answering Models Understand and Respond in Context". International Journal for Research in Applied Science and Engineering Technology 11, no. 9 (September 30, 2023): 224–28. http://dx.doi.org/10.22214/ijraset.2023.55597.
Demircioğlu, Aydin. "Deep Features from Pretrained Networks Do Not Outperform Hand-Crafted Features in Radiomics". Diagnostics 13, no. 20 (October 20, 2023): 3266. http://dx.doi.org/10.3390/diagnostics13203266.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning". Information 14, no. 3 (March 16, 2023): 187. http://dx.doi.org/10.3390/info14030187.
Jackson, Richard G., Erik Jansson, Aron Lagerberg, Elliot Ford, Vladimir Poroshin, Timothy Scrivener, Mats Axelsson, Martin Johansson, Lesly Arun Franco, and Eliseo Papa. "Ablations over transformer models for biomedical relationship extraction". F1000Research 9 (July 16, 2020): 710. http://dx.doi.org/10.12688/f1000research.24552.1.
Sahel, S., M. Alsahafi, M. Alghamdi, and T. Alsubait. "Logo Detection Using Deep Learning with Pretrained CNN Models". Engineering, Technology & Applied Science Research 11, no. 1 (February 6, 2021): 6724–29. http://dx.doi.org/10.48084/etasr.3919.
Jiang, Shengyi, Sihui Fu, Nankai Lin, and Yingwen Fu. "Pretrained models and evaluation data for the Khmer language". Tsinghua Science and Technology 27, no. 4 (August 2022): 709–18. http://dx.doi.org/10.26599/tst.2021.9010060.
Zeng, Zhiyuan, and Deyi Xiong. "Unsupervised and few-shot parsing from pretrained language models". Artificial Intelligence 305 (April 2022): 103665. http://dx.doi.org/10.1016/j.artint.2022.103665.
Saravagi, Deepika, Shweta Agrawal, Manisha Saravagi, Jyotir Moy Chatterjee, and Mohit Agarwal. "Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models". Computational Intelligence and Neuroscience 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7459260.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Erratum: Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1407. http://dx.doi.org/10.1162/tacl_x_00455.
Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models". International Journal of Environmental Research and Public Health 19, no. 22 (November 21, 2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models". Applied Sciences 11, no. 5 (February 24, 2021): 1974. http://dx.doi.org/10.3390/app11051974.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation". Future Internet 12, no. 12 (November 27, 2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lobo, Fernando, Maily Selena González, Alicia Boto, and José Manuel Pérez de la Lastra. "Prediction of Antifungal Activity of Antimicrobial Peptides by Transfer Learning from Protein Pretrained Models". International Journal of Molecular Sciences 24, no. 12 (June 17, 2023): 10270. http://dx.doi.org/10.3390/ijms241210270.
Zhang, Tianyu, Jake Gu, Omid Ardakanian, and Joyce Kim. "Addressing data inadequacy challenges in personal comfort models by combining pretrained comfort models". Energy and Buildings 264 (June 2022): 112068. http://dx.doi.org/10.1016/j.enbuild.2022.112068.
Yang, Xi, Jiang Bian, William R. Hogan, and Yonghui Wu. "Clinical concept extraction using transformers". Journal of the American Medical Informatics Association 27, no. 12 (October 29, 2020): 1935–42. http://dx.doi.org/10.1093/jamia/ocaa189.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation". Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
AlOyaynaa, Sarah, and Yasser Kotb. "Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models". ITM Web of Conferences 56 (2023): 04009. http://dx.doi.org/10.1051/itmconf/20235604009.
Kalyan, Katikapalli Subramanyam, Ajit Rajasekharan, and Sivanesan Sangeetha. "AMMU: A survey of transformer-based biomedical pretrained language models". Journal of Biomedical Informatics 126 (February 2022): 103982. http://dx.doi.org/10.1016/j.jbi.2021.103982.
Silver, Tom, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Kaelbling, and Michael Katz. "Generalized Planning in PDDL Domains with Pretrained Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20256–64. http://dx.doi.org/10.1609/aaai.v38i18.30006.
Ahmad, Muhammad Shahrul Zaim, Nor Azlina Ab. Aziz, and Anith Khairunnisa Ghazali. "Development of Automated Attendance System Using Pretrained Deep Learning Models". International Journal on Robotics, Automation and Sciences 6, no. 1 (April 30, 2024): 6–12. http://dx.doi.org/10.33093/ijoras.2024.6.1.2.
Yulianto, Rudy, Faqihudin, Meika Syahbana Rusli, Adhitio Satyo Bayangkari Karno, Widi Hastomo, Aqwam Rosadi Kardian, Vany Terisia, and Tri Surawan. "Innovative UNET-Based Steel Defect Detection Using 5 Pretrained Models". Evergreen 10, no. 4 (December 2023): 2365–78. http://dx.doi.org/10.5109/7160923.
Yin, Yi, Weiming Zhang, Nenghai Yu, and Kejiang Chen. "Steganalysis of neural networks based on parameter statistical bias". Journal of University of Science and Technology of China 52, no. 1 (2022): 1. http://dx.doi.org/10.52396/justc-2021-0197.
AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts". Applied Sciences 13, no. 12 (June 18, 2023): 7255. http://dx.doi.org/10.3390/app13127255.
Albashish, Dheeb. "Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images". PeerJ Computer Science 8 (July 5, 2022): e1031. http://dx.doi.org/10.7717/peerj-cs.1031.
Pan, Yu, Ye Yuan, Yichun Yin, Jiaxin Shi, Zenglin Xu, Ming Zhang, Lifeng Shang, Xin Jiang, and Qun Liu. "Preparing Lessons for Progressive Training on Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Anupriya, Anupriya. "Fine-tuning Pretrained Transformers for Sentiment Analysis on Twitter Data". Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1344–52. http://dx.doi.org/10.17762/msea.v70i2.2326.
Zhang, Zhanhao. "The transferability of transfer learning model based on ImageNet for medical image classification tasks". Applied and Computational Engineering 18, no. 1 (October 23, 2023): 143–51. http://dx.doi.org/10.54254/2755-2721/18/20230980.
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Siahkoohi, Ali, Mathias Louboutin, and Felix J. Herrmann. "The importance of transfer learning in seismic modeling and imaging". GEOPHYSICS 84, no. 6 (November 1, 2019): A47–A52. http://dx.doi.org/10.1190/geo2019-0056.1.
Chen, Die, Hua Zhang, Zeqi Chen, Bo Xie, and Ye Wang. "Comparative Analysis on Alignment-Based and Pretrained Feature Representations for the Identification of DNA-Binding Proteins". Computational and Mathematical Methods in Medicine 2022 (June 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/5847242.