Journal articles on the topic "Pretrained models"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for research on the topic "Pretrained models."
Next to every work in the list of references there is an "Add to bibliography" button. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models." Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need." Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images." Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation." Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models." Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Si, Chenglei, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. "Sub-Character Tokenization for Chinese Pretrained Language Models." Transactions of the Association for Computational Linguistics 11 (May 18, 2023): 469–87. http://dx.doi.org/10.1162/tacl_a_00560.
Ren, Guanyu. "Monkeypox Disease Detection with Pretrained Deep Learning Models." Information Technology and Control 52, no. 2 (July 15, 2023): 288–96. http://dx.doi.org/10.5755/j01.itc.52.2.32803.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue." Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Choi, Yong-Seok, Yo-Han Park, and Kong Joo Lee. "Building a Korean morphological analyzer using two Korean BERT models." PeerJ Computer Science 8 (May 2, 2022): e968. http://dx.doi.org/10.7717/peerj-cs.968.
Kim, Hyunil, Tae-Yeong Kwak, Hyeyoon Chang, Sun Woo Kim, and Injung Kim. "RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis." Bioengineering 10, no. 11 (November 2, 2023): 1279. http://dx.doi.org/10.3390/bioengineering10111279.
Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models." Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.
Almonacid-Olleros, Guillermo, Gabino Almonacid, David Gil, and Javier Medina-Quero. "Evaluation of Transfer Learning and Fine-Tuning to Nowcast Energy Generation of Photovoltaic Systems in Different Climates." Sustainability 14, no. 5 (March 7, 2022): 3092. http://dx.doi.org/10.3390/su14053092.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (April 29, 2022): 4522. http://dx.doi.org/10.3390/app12094522.
Mutreja, G., and K. Bittner. "Evaluating ConvNet and Transformer Based Self-Supervised Algorithms for Building Roof Form Classification." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 315–21. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-315-2023.
Malyala, Sohith Sai, Janardhan Reddy Guntaka, Sai Vignesh Chintala, Lohith Vattikuti, and SrinivasaRao Tummalapalli. "Exploring How AI Answering Models Understand and Respond in Context." International Journal for Research in Applied Science and Engineering Technology 11, no. 9 (September 30, 2023): 224–28. http://dx.doi.org/10.22214/ijraset.2023.55597.
Demircioğlu, Aydin. "Deep Features from Pretrained Networks Do Not Outperform Hand-Crafted Features in Radiomics." Diagnostics 13, no. 20 (October 20, 2023): 3266. http://dx.doi.org/10.3390/diagnostics13203266.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning." Information 14, no. 3 (March 16, 2023): 187. http://dx.doi.org/10.3390/info14030187.
Jackson, Richard G., Erik Jansson, Aron Lagerberg, Elliot Ford, Vladimir Poroshin, Timothy Scrivener, Mats Axelsson, Martin Johansson, Lesly Arun Franco, and Eliseo Papa. "Ablations over transformer models for biomedical relationship extraction." F1000Research 9 (July 16, 2020): 710. http://dx.doi.org/10.12688/f1000research.24552.1.
Sahel, S., M. Alsahafi, M. Alghamdi, and T. Alsubait. "Logo Detection Using Deep Learning with Pretrained CNN Models." Engineering, Technology & Applied Science Research 11, no. 1 (February 6, 2021): 6724–29. http://dx.doi.org/10.48084/etasr.3919.
Jiang, Shengyi, Sihui Fu, Nankai Lin, and Yingwen Fu. "Pretrained models and evaluation data for the Khmer language." Tsinghua Science and Technology 27, no. 4 (August 2022): 709–18. http://dx.doi.org/10.26599/tst.2021.9010060.
Zeng, Zhiyuan, and Deyi Xiong. "Unsupervised and few-shot parsing from pretrained language models." Artificial Intelligence 305 (April 2022): 103665. http://dx.doi.org/10.1016/j.artint.2022.103665.
Saravagi, Deepika, Shweta Agrawal, Manisha Saravagi, Jyotir Moy Chatterjee, and Mohit Agarwal. "Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models." Computational Intelligence and Neuroscience 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7459260.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Erratum: Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1407. http://dx.doi.org/10.1162/tacl_x_00455.
Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models." International Journal of Environmental Research and Public Health 19, no. 22 (November 21, 2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (February 24, 2021): 1974. http://dx.doi.org/10.3390/app11051974.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation." Future Internet 12, no. 12 (November 27, 2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lobo, Fernando, Maily Selena González, Alicia Boto, and José Manuel Pérez de la Lastra. "Prediction of Antifungal Activity of Antimicrobial Peptides by Transfer Learning from Protein Pretrained Models." International Journal of Molecular Sciences 24, no. 12 (June 17, 2023): 10270. http://dx.doi.org/10.3390/ijms241210270.
Zhang, Tianyu, Jake Gu, Omid Ardakanian, and Joyce Kim. "Addressing data inadequacy challenges in personal comfort models by combining pretrained comfort models." Energy and Buildings 264 (June 2022): 112068. http://dx.doi.org/10.1016/j.enbuild.2022.112068.
Yang, Xi, Jiang Bian, William R. Hogan, and Yonghui Wu. "Clinical concept extraction using transformers." Journal of the American Medical Informatics Association 27, no. 12 (October 29, 2020): 1935–42. http://dx.doi.org/10.1093/jamia/ocaa189.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
AlOyaynaa, Sarah, and Yasser Kotb. "Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models." ITM Web of Conferences 56 (2023): 04009. http://dx.doi.org/10.1051/itmconf/20235604009.
Kalyan, Katikapalli Subramanyam, Ajit Rajasekharan, and Sivanesan Sangeetha. "AMMU: A survey of transformer-based biomedical pretrained language models." Journal of Biomedical Informatics 126 (February 2022): 103982. http://dx.doi.org/10.1016/j.jbi.2021.103982.
Silver, Tom, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Kaelbling, and Michael Katz. "Generalized Planning in PDDL Domains with Pretrained Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20256–64. http://dx.doi.org/10.1609/aaai.v38i18.30006.
Ahmad, Muhammad Shahrul Zaim, Nor Azlina Ab. Aziz, and Anith Khairunnisa Ghazali. "Development of Automated Attendance System Using Pretrained Deep Learning Models." International Journal on Robotics, Automation and Sciences 6, no. 1 (April 30, 2024): 6–12. http://dx.doi.org/10.33093/ijoras.2024.6.1.2.
Yulianto, Rudy, Faqihudin, Meika Syahbana Rusli, Adhitio Satyo Bayangkari Karno, Widi Hastomo, Aqwam Rosadi Kardian, Vany Terisia, and Tri Surawan. "Innovative UNET-Based Steel Defect Detection Using 5 Pretrained Models." Evergreen 10, no. 4 (December 2023): 2365–78. http://dx.doi.org/10.5109/7160923.
Yin, Yi, Weiming Zhang, Nenghai Yu, and Kejiang Chen. "Steganalysis of neural networks based on parameter statistical bias." Journal of University of Science and Technology of China 52, no. 1 (2022): 1. http://dx.doi.org/10.52396/justc-2021-0197.
AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts." Applied Sciences 13, no. 12 (June 18, 2023): 7255. http://dx.doi.org/10.3390/app13127255.
Albashish, Dheeb. "Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images." PeerJ Computer Science 8 (July 5, 2022): e1031. http://dx.doi.org/10.7717/peerj-cs.1031.
Pan, Yu, Ye Yuan, Yichun Yin, Jiaxin Shi, Zenglin Xu, Ming Zhang, Lifeng Shang, Xin Jiang, and Qun Liu. "Preparing Lessons for Progressive Training on Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Anupriya, Anupriya. "Fine-tuning Pretrained Transformers for Sentiment Analysis on Twitter Data." Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1344–52. http://dx.doi.org/10.17762/msea.v70i2.2326.
Zhang, Zhanhao. "The transferability of transfer learning model based on ImageNet for medical image classification tasks." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 143–51. http://dx.doi.org/10.54254/2755-2721/18/20230980.
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (December 1, 2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Siahkoohi, Ali, Mathias Louboutin, and Felix J. Herrmann. "The importance of transfer learning in seismic modeling and imaging." GEOPHYSICS 84, no. 6 (November 1, 2019): A47–A52. http://dx.doi.org/10.1190/geo2019-0056.1.
Chen, Die, Hua Zhang, Zeqi Chen, Bo Xie, and Ye Wang. "Comparative Analysis on Alignment-Based and Pretrained Feature Representations for the Identification of DNA-Binding Proteins." Computational and Mathematical Methods in Medicine 2022 (June 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/5847242.