Selected scientific literature on the topic "Pretrained models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Pretrained models".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
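To illustrate what style-specific citation generation involves, here is a minimal sketch that renders one of the entries below in simplified APA and Chicago author-date forms. This is purely illustrative: the function names and field layout are hypothetical, not the site's actual implementation, and real style guides have many more rules (author-name inversion, capitalization, punctuation) than these one-line templates capture.

```python
# Hypothetical sketch of style-specific citation formatting.
# Each style is just a different template over the same reference fields.

def format_apa(authors, year, title, journal, volume, pages, doi):
    """Simplified APA: Authors (Year). Title. Journal, Volume, pages. DOI"""
    return f"{authors} ({year}). {title}. {journal}, {volume}, {pages}. {doi}"

def format_chicago(authors, year, title, journal, volume, pages, doi):
    """Simplified Chicago author-date: Authors. "Title". Journal Volume (Year): pages. DOI"""
    return f'{authors.rstrip(".")}. "{title}". {journal} {volume} ({year}): {pages}. {doi}'

# Reference data taken from the first journal entry in the list below.
ref = dict(
    authors="Hofmann, V., Glavaš, G., Ljubešić, N., Pierrehumbert, J. B., & Schütze, H.",
    year=2024,
    title="Geographic adaptation of pretrained language models",
    journal="Transactions of the Association for Computational Linguistics",
    volume=12,
    pages="411–431",
    doi="http://dx.doi.org/10.1162/tacl_a_00652",
)

print(format_apa(**ref))
print(format_chicago(**ref))
```

The same record rendered through different templates is, in essence, what clicking "Add to bibliography" with a different style selected does.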
Journal articles on the topic "Pretrained models"
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resource Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Theses / dissertations on the topic "Pretrained models"
Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel reservations, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system, applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pretrained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither the pipeline nor the end-to-end architecture is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to benefit from the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely". Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.
Book chapters on the topic "Pretrained models"
Gad, Ahmed Fawzy. "Deploying Pretrained Models". In Practical Computer Vision Applications Using Deep Learning with CNNs, 295–338. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4167-7_7.
Jain, Shashank Mohan. "Fine-Tuning Pretrained Models". In Introduction to Transformers for NLP, 137–51. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8844-3_6.
Sun, Kaili, Xudong Luo, and Michael Y. Luo. "A Survey of Pretrained Language Models". In Knowledge Science, Engineering and Management, 442–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_36.
Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese". In Intelligent Systems, 403–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61377-8_28.
Song, Yunfeng, Xiaochao Fan, Yong Yang, Ge Ren, and Weiming Pan. "Large Pretrained Models on Multimodal Sentiment Analysis". In Lecture Notes in Electrical Engineering, 506–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9423-3_63.
Lovón-Melgarejo, Jesús, Jose G. Moreno, Romaric Besançon, Olivier Ferret, and Lynda Tamine. "Probing Pretrained Language Models with Hierarchy Properties". In Lecture Notes in Computer Science, 126–42. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56060-6_9.
Yarlagadda, Madhulika, Susrutha Ettimalla, and Bhanu Sri Davuluri. "Zero-Shot Document Classification Using Pretrained Models". In Multifaceted approaches for Data Acquisition, Processing & Communication, 104–10. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003470939-14.
Tan, Zhen, Lu Cheng, Song Wang, Bo Yuan, Jundong Li, and Huan Liu. "Interpreting Pretrained Language Models via Concept Bottlenecks". In Advances in Knowledge Discovery and Data Mining, 56–74. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2259-4_5.
Hao, Kaifeng, Jianfeng Li, Cuiqin Hou, Xuexuan Wang, and Pengyu Li. "Combining Pretrained and Graph Models for Text Classification". In Communications in Computer and Information Science, 422–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92307-5_49.
Ni, Bolin, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. "Expanding Language-Image Pretrained Models for General Video Recognition". In Lecture Notes in Computer Science, 1–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_1.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Pretrained models"
Zhang, Zhiyuan, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, and Bin He. "Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.25.
Chen, Catherine, Kevin Lin, and Dan Klein. "Constructing Taxonomies from Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.373.
Koto, Fajri, Jey Han Lau, and Timothy Baldwin. "Discourse Probing of Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.301.
Zhou, Jingren. "Large-scale Multi-Modality Pretrained Models". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3480241.
Davison, Joe, Joshua Feldman, and Alexander Rush. "Commonsense Knowledge Mining from Pretrained Models". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1109.
Weller, Orion, Marc Marone, Vladimir Braverman, Dawn Lawrie, and Benjamin Van Durme. "Pretrained Models for Multilingual Federated Learning". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.101.
Troshin, Sergey, and Nadezhda Chirkova. "Probing Pretrained Models of Source Codes". In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.31.
Dalvi, Fahim, Hassan Sajjad, Nadir Durrani, and Yonatan Belinkov. "Analyzing Redundancy in Pretrained Transformer Models". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.398.
Tamkin, Alex, Trisha Singh, Davide Giovanardi, and Noah Goodman. "Investigating Transferability in Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.125.
Kurita, Keita, Paul Michel, and Graham Neubig. "Weight Poisoning Attacks on Pretrained Models". In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.249.
Texto completo da fonteRelatórios de organizações sobre o assunto "Pretrained models"
Lohn, Andrew. Poison in the Well: Securing the Shared Resources of Machine Learning. Center for Security and Emerging Technology, June 2021. http://dx.doi.org/10.51593/2020ca013.
Shrestha, Tanuja, Mir A. Matin, Vishwas Chitale, and Samuel Thomas. Exploring the potential of deep learning for classifying camera trap data: A case study from Nepal - working paper. International Centre for Integrated Mountain Development (ICIMOD), September 2023. http://dx.doi.org/10.53055/icimod.1016.