Ready-made bibliography on the topic "Pretrained models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Pretrained models".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Pretrained models"
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don't Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Pełny tekst źródłaRozprawy doktorskie na temat "Pretrained models"
Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.
Pełny tekst źródłaPelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
Pełny tekst źródłaIn this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel booking reservations, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system, applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models.However, none of the architectures, pipeline or end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to benefit from the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture
Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely". Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.
Pełny tekst źródłaCzęści książek na temat "Pretrained models"
Gad, Ahmed Fawzy. "Deploying Pretrained Models". In Practical Computer Vision Applications Using Deep Learning with CNNs, 295–338. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4167-7_7.
Jain, Shashank Mohan. "Fine-Tuning Pretrained Models". In Introduction to Transformers for NLP, 137–51. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8844-3_6.
Sun, Kaili, Xudong Luo, and Michael Y. Luo. "A Survey of Pretrained Language Models". In Knowledge Science, Engineering and Management, 442–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_36.
Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese". In Intelligent Systems, 403–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61377-8_28.
Song, Yunfeng, Xiaochao Fan, Yong Yang, Ge Ren, and Weiming Pan. "Large Pretrained Models on Multimodal Sentiment Analysis". In Lecture Notes in Electrical Engineering, 506–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9423-3_63.
Lovón-Melgarejo, Jesús, Jose G. Moreno, Romaric Besançon, Olivier Ferret, and Lynda Tamine. "Probing Pretrained Language Models with Hierarchy Properties". In Lecture Notes in Computer Science, 126–42. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56060-6_9.
Yarlagadda, Madhulika, Susrutha Ettimalla, and Bhanu Sri Davuluri. "Zero-Shot Document Classification Using Pretrained Models". In Multifaceted approaches for Data Acquisition, Processing & Communication, 104–10. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003470939-14.
Tan, Zhen, Lu Cheng, Song Wang, Bo Yuan, Jundong Li, and Huan Liu. "Interpreting Pretrained Language Models via Concept Bottlenecks". In Advances in Knowledge Discovery and Data Mining, 56–74. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2259-4_5.
Hao, Kaifeng, Jianfeng Li, Cuiqin Hou, Xuexuan Wang, and Pengyu Li. "Combining Pretrained and Graph Models for Text Classification". In Communications in Computer and Information Science, 422–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92307-5_49.
Ni, Bolin, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. "Expanding Language-Image Pretrained Models for General Video Recognition". In Lecture Notes in Computer Science, 1–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_1.
Pełny tekst źródłaStreszczenia konferencji na temat "Pretrained models"
Zhang, Zhiyuan, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, and Bin He. "Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.25.
Chen, Catherine, Kevin Lin, and Dan Klein. "Constructing Taxonomies from Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.373.
Koto, Fajri, Jey Han Lau, and Timothy Baldwin. "Discourse Probing of Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.301.
Zhou, Jingren. "Large-scale Multi-Modality Pretrained Models". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3480241.
Davison, Joe, Joshua Feldman, and Alexander Rush. "Commonsense Knowledge Mining from Pretrained Models". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1109.
Weller, Orion, Marc Marone, Vladimir Braverman, Dawn Lawrie, and Benjamin Van Durme. "Pretrained Models for Multilingual Federated Learning". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.101.
Troshin, Sergey, and Nadezhda Chirkova. "Probing Pretrained Models of Source Codes". In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.31.
Dalvi, Fahim, Hassan Sajjad, Nadir Durrani, and Yonatan Belinkov. "Analyzing Redundancy in Pretrained Transformer Models". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.398.
Tamkin, Alex, Trisha Singh, Davide Giovanardi, and Noah Goodman. "Investigating Transferability in Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.125.
Kurita, Keita, Paul Michel, and Graham Neubig. "Weight Poisoning Attacks on Pretrained Models". In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.249.
Pełny tekst źródłaRaporty organizacyjne na temat "Pretrained models"
Lohn, Andrew. Poison in the Well: Securing the Shared Resources of Machine Learning. Center for Security and Emerging Technology, June 2021. http://dx.doi.org/10.51593/2020ca013.
Shrestha, Tanuja, Mir A. Matin, Vishwas Chitale, and Samuel Thomas. Exploring the potential of deep learning for classifying camera trap data: A case study from Nepal - working paper. International Centre for Integrated Mountain Development (ICIMOD), September 2023. http://dx.doi.org/10.53055/icimod.1016.