Selection of academic literature on the topic "Pretrained models"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other academic sources on the topic "Pretrained models."
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read an online annotation of the work, where these are available in the source metadata.
Journal articles on the topic "Pretrained models"
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models." Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don't Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need." Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images." Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation." Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–3804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models." Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Dissertations on the topic "Pretrained models"
Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés" [Spoken language understanding in human-machine dialogue systems in the era of pretrained models]. Electronic thesis or dissertation, Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
Der volle Inhalt der QuelleIn this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel booking reservations, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system, applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models.However, none of the architectures, pipeline or end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to benefit from the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture
Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely" [End-to-end dialogue systems with pretrained language models]. Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.
Book chapters on the topic "Pretrained models"
Gad, Ahmed Fawzy. "Deploying Pretrained Models." In Practical Computer Vision Applications Using Deep Learning with CNNs, 295–338. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4167-7_7.
Jain, Shashank Mohan. "Fine-Tuning Pretrained Models." In Introduction to Transformers for NLP, 137–51. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8844-3_6.
Sun, Kaili, Xudong Luo, and Michael Y. Luo. "A Survey of Pretrained Language Models." In Knowledge Science, Engineering and Management, 442–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_36.
Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese." In Intelligent Systems, 403–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61377-8_28.
Song, Yunfeng, Xiaochao Fan, Yong Yang, Ge Ren, and Weiming Pan. "Large Pretrained Models on Multimodal Sentiment Analysis." In Lecture Notes in Electrical Engineering, 506–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9423-3_63.
Lovón-Melgarejo, Jesús, Jose G. Moreno, Romaric Besançon, Olivier Ferret, and Lynda Tamine. "Probing Pretrained Language Models with Hierarchy Properties." In Lecture Notes in Computer Science, 126–42. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56060-6_9.
Yarlagadda, Madhulika, Susrutha Ettimalla, and Bhanu Sri Davuluri. "Zero-Shot Document Classification Using Pretrained Models." In Multifaceted approaches for Data Acquisition, Processing & Communication, 104–10. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003470939-14.
Tan, Zhen, Lu Cheng, Song Wang, Bo Yuan, Jundong Li, and Huan Liu. "Interpreting Pretrained Language Models via Concept Bottlenecks." In Advances in Knowledge Discovery and Data Mining, 56–74. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2259-4_5.
Hao, Kaifeng, Jianfeng Li, Cuiqin Hou, Xuexuan Wang, and Pengyu Li. "Combining Pretrained and Graph Models for Text Classification." In Communications in Computer and Information Science, 422–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92307-5_49.
Ni, Bolin, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. "Expanding Language-Image Pretrained Models for General Video Recognition." In Lecture Notes in Computer Science, 1–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_1.
Conference papers on the topic "Pretrained models"
Zhang, Zhiyuan, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, and Bin He. "Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.25.
Chen, Catherine, Kevin Lin, and Dan Klein. "Constructing Taxonomies from Pretrained Language Models." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.373.
Koto, Fajri, Jey Han Lau, and Timothy Baldwin. "Discourse Probing of Pretrained Language Models." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.301.
Zhou, Jingren. "Large-scale Multi-Modality Pretrained Models." In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3480241.
Davison, Joe, Joshua Feldman, and Alexander Rush. "Commonsense Knowledge Mining from Pretrained Models." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1109.
Weller, Orion, Marc Marone, Vladimir Braverman, Dawn Lawrie, and Benjamin Van Durme. "Pretrained Models for Multilingual Federated Learning." In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.101.
Troshin, Sergey, and Nadezhda Chirkova. "Probing Pretrained Models of Source Codes." In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.31.
Dalvi, Fahim, Hassan Sajjad, Nadir Durrani, and Yonatan Belinkov. "Analyzing Redundancy in Pretrained Transformer Models." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.398.
Tamkin, Alex, Trisha Singh, Davide Giovanardi, and Noah Goodman. "Investigating Transferability in Pretrained Language Models." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.125.
Kurita, Keita, Paul Michel, and Graham Neubig. "Weight Poisoning Attacks on Pretrained Models." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.249.
Reports by organizations on the topic "Pretrained models"
Lohn, Andrew. Poison in the Well: Securing the Shared Resources of Machine Learning. Center for Security and Emerging Technology, June 2021. http://dx.doi.org/10.51593/2020ca013.
Shrestha, Tanuja, Mir A. Matin, Vishwas Chitale, and Samuel Thomas. Exploring the potential of deep learning for classifying camera trap data: A case study from Nepal - working paper. International Centre for Integrated Mountain Development (ICIMOD), September 2023. http://dx.doi.org/10.53055/icimod.1016.