Journal articles on the topic "Pretrained models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 scholarly journal articles on the topic "Pretrained models".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, provided the relevant parameters are available in its metadata.
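The site does not expose how it generates these references, but every entry below carries a DOI link, and the public DOI content-negotiation service (https://citation.crosscite.org/docs.html) can return a reference formatted in any Citation Style Language (CSL) style on request. The Python sketch below is a minimal illustration of that public service, not this site's own mechanism; the helper name format_reference is hypothetical, and the sketch assumes network access and the requests library.

import requests

def format_reference(doi: str, style: str = "apa") -> str:
    """Ask doi.org for a reference to `doi`, formatted in the CSL style `style`."""
    # Content negotiation: doi.org forwards requests whose Accept header is
    # "text/x-bibliography; style=<csl-style-id>" to a citation formatter.
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text.strip()

# The first entry in the list below, rendered in two styles:
print(format_reference("10.1162/tacl_a_00652", style="apa"))
print(format_reference("10.1162/tacl_a_00652", style="chicago-author-date"))

Other identifiers from the CSL style repository, such as "vancouver" or "harvard-cite-them-right", select the corresponding styles mentioned above.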
Browse journal articles from a wide range of disciplines and compile accurate bibliographies.
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (21.06.2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (26.06.2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (9.08.2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (31.05.2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (13.06.2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (26.06.2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Si, Chenglei, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. "Sub-Character Tokenization for Chinese Pretrained Language Models". Transactions of the Association for Computational Linguistics 11 (18.05.2023): 469–87. http://dx.doi.org/10.1162/tacl_a_00560.
Ren, Guanyu. "Monkeypox Disease Detection with Pretrained Deep Learning Models". Information Technology and Control 52, no. 2 (15.07.2023): 288–96. http://dx.doi.org/10.5755/j01.itc.52.2.32803.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue". Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Choi, Yong-Seok, Yo-Han Park, and Kong Joo Lee. "Building a Korean morphological analyzer using two Korean BERT models". PeerJ Computer Science 8 (2.05.2022): e968. http://dx.doi.org/10.7717/peerj-cs.968.
Kim, Hyunil, Tae-Yeong Kwak, Hyeyoon Chang, Sun Woo Kim, and Injung Kim. "RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis". Bioengineering 10, no. 11 (2.11.2023): 1279. http://dx.doi.org/10.3390/bioengineering10111279.
Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models". Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.
Almonacid-Olleros, Guillermo, Gabino Almonacid, David Gil, and Javier Medina-Quero. "Evaluation of Transfer Learning and Fine-Tuning to Nowcast Energy Generation of Photovoltaic Systems in Different Climates". Sustainability 14, no. 5 (7.03.2022): 3092. http://dx.doi.org/10.3390/su14053092.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models". Applied Sciences 12, no. 9 (29.04.2022): 4522. http://dx.doi.org/10.3390/app12094522.
Mutreja, G., and K. Bittner. "EVALUATING CONVNET AND TRANSFORMER BASED SELF-SUPERVISED ALGORITHMS FOR BUILDING ROOF FORM CLASSIFICATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (13.12.2023): 315–21. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-315-2023.
Malyala, Sohith Sai, Janardhan Reddy Guntaka, Sai Vignesh Chintala, Lohith Vattikuti, and SrinivasaRao Tummalapalli. "Exploring How AI Answering Models Understand and Respond in Context". International Journal for Research in Applied Science and Engineering Technology 11, no. 9 (30.09.2023): 224–28. http://dx.doi.org/10.22214/ijraset.2023.55597.
Demircioğlu, Aydin. "Deep Features from Pretrained Networks Do Not Outperform Hand-Crafted Features in Radiomics". Diagnostics 13, no. 20 (20.10.2023): 3266. http://dx.doi.org/10.3390/diagnostics13203266.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning". Information 14, no. 3 (16.03.2023): 187. http://dx.doi.org/10.3390/info14030187.
Jackson, Richard G., Erik Jansson, Aron Lagerberg, Elliot Ford, Vladimir Poroshin, Timothy Scrivener, Mats Axelsson, Martin Johansson, Lesly Arun Franco, and Eliseo Papa. "Ablations over transformer models for biomedical relationship extraction". F1000Research 9 (16.07.2020): 710. http://dx.doi.org/10.12688/f1000research.24552.1.
Sahel, S., M. Alsahafi, M. Alghamdi, and T. Alsubait. "Logo Detection Using Deep Learning with Pretrained CNN Models". Engineering, Technology & Applied Science Research 11, no. 1 (6.02.2021): 6724–29. http://dx.doi.org/10.48084/etasr.3919.
Jiang, Shengyi, Sihui Fu, Nankai Lin, and Yingwen Fu. "Pretrained models and evaluation data for the Khmer language". Tsinghua Science and Technology 27, no. 4 (August 2022): 709–18. http://dx.doi.org/10.26599/tst.2021.9010060.
Zeng, Zhiyuan, and Deyi Xiong. "Unsupervised and few-shot parsing from pretrained language models". Artificial Intelligence 305 (April 2022): 103665. http://dx.doi.org/10.1016/j.artint.2022.103665.
Saravagi, Deepika, Shweta Agrawal, Manisha Saravagi, Jyotir Moy Chatterjee, and Mohit Agarwal. "Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models". Computational Intelligence and Neuroscience 2022 (13.04.2022): 1–12. http://dx.doi.org/10.1155/2022/7459260.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Erratum: Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1407. http://dx.doi.org/10.1162/tacl_x_00455.
Al-Sarem, Mohammed, Mohammed Al-Asali, Ahmed Yaseen Alqutaibi, and Faisal Saeed. "Enhanced Tooth Region Detection Using Pretrained Deep Learning Models". International Journal of Environmental Research and Public Health 19, no. 22 (21.11.2022): 15414. http://dx.doi.org/10.3390/ijerph192215414.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (26.06.2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models". Applied Sciences 11, no. 5 (24.02.2021): 1974. http://dx.doi.org/10.3390/app11051974.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation". Future Internet 12, no. 12 (27.11.2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lobo, Fernando, Maily Selena González, Alicia Boto, and José Manuel Pérez de la Lastra. "Prediction of Antifungal Activity of Antimicrobial Peptides by Transfer Learning from Protein Pretrained Models". International Journal of Molecular Sciences 24, no. 12 (17.06.2023): 10270. http://dx.doi.org/10.3390/ijms241210270.
Zhang, Tianyu, Jake Gu, Omid Ardakanian, and Joyce Kim. "Addressing data inadequacy challenges in personal comfort models by combining pretrained comfort models". Energy and Buildings 264 (June 2022): 112068. http://dx.doi.org/10.1016/j.enbuild.2022.112068.
Yang, Xi, Jiang Bian, William R. Hogan, and Yonghui Wu. "Clinical concept extraction using transformers". Journal of the American Medical Informatics Association 27, no. 12 (29.10.2020): 1935–42. http://dx.doi.org/10.1093/jamia/ocaa189.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation". Information 13, no. 5 (23.04.2022): 220. http://dx.doi.org/10.3390/info13050220.
AlOyaynaa, Sarah, and Yasser Kotb. "Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models". ITM Web of Conferences 56 (2023): 04009. http://dx.doi.org/10.1051/itmconf/20235604009.
Kalyan, Katikapalli Subramanyam, Ajit Rajasekharan, and Sivanesan Sangeetha. "AMMU: A survey of transformer-based biomedical pretrained language models". Journal of Biomedical Informatics 126 (February 2022): 103982. http://dx.doi.org/10.1016/j.jbi.2021.103982.
Silver, Tom, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Kaelbling, and Michael Katz. "Generalized Planning in PDDL Domains with Pretrained Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (24.03.2024): 20256–64. http://dx.doi.org/10.1609/aaai.v38i18.30006.
Ahmad, Muhammad Shahrul Zaim, Nor Azlina Ab. Aziz, and Anith Khairunnisa Ghazali. "Development of Automated Attendance System Using Pretrained Deep Learning Models". International Journal on Robotics, Automation and Sciences 6, no. 1 (30.04.2024): 6–12. http://dx.doi.org/10.33093/ijoras.2024.6.1.2.
Yulianto, Rudy, Faqihudin, Meika Syahbana Rusli, Adhitio Satyo Bayangkari Karno, Widi Hastomo, Aqwam Rosadi Kardian, Vany Terisia, and Tri Surawan. "Innovative UNET-Based Steel Defect Detection Using 5 Pretrained Models". Evergreen 10, no. 4 (December 2023): 2365–78. http://dx.doi.org/10.5109/7160923.
Yin, Yi, Weiming Zhang, Nenghai Yu, and Kejiang Chen. "Steganalysis of neural networks based on parameter statistical bias". Journal of University of Science and Technology of China 52, no. 1 (2022): 1. http://dx.doi.org/10.52396/justc-2021-0197.
AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts". Applied Sciences 13, no. 12 (18.06.2023): 7255. http://dx.doi.org/10.3390/app13127255.
Albashish, Dheeb. "Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images". PeerJ Computer Science 8 (5.07.2022): e1031. http://dx.doi.org/10.7717/peerj-cs.1031.
Pan, Yu, Ye Yuan, Yichun Yin, Jiaxin Shi, Zenglin Xu, Ming Zhang, Lifeng Shang, Xin Jiang, and Qun Liu. "Preparing Lessons for Progressive Training on Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (24.03.2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Anupriya, Anupriya. "Fine-tuning Pretrained Transformers for Sentiment Analysis on Twitter Data". Mathematical Statistician and Engineering Applications 70, no. 2 (26.02.2021): 1344–52. http://dx.doi.org/10.17762/msea.v70i2.2326.
Zhang, Zhanhao. "The transferability of transfer learning model based on ImageNet for medical image classification tasks". Applied and Computational Engineering 18, no. 1 (23.10.2023): 143–51. http://dx.doi.org/10.54254/2755-2721/18/20230980.
Anton, Jonah, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, and Ketan Kotecha. "How Well Do Self-Supervised Models Transfer to Medical Imaging?" Journal of Imaging 8, no. 12 (1.12.2022): 320. http://dx.doi.org/10.3390/jimaging8120320.
Siahkoohi, Ali, Mathias Louboutin, and Felix J. Herrmann. "The importance of transfer learning in seismic modeling and imaging". GEOPHYSICS 84, no. 6 (1.11.2019): A47–A52. http://dx.doi.org/10.1190/geo2019-0056.1.
Chen, Die, Hua Zhang, Zeqi Chen, Bo Xie, and Ye Wang. "Comparative Analysis on Alignment-Based and Pretrained Feature Representations for the Identification of DNA-Binding Proteins". Computational and Mathematical Methods in Medicine 2022 (28.06.2022): 1–14. http://dx.doi.org/10.1155/2022/5847242.