Academic literature on the topic "Pretrained models"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Pretrained models".
Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Pretrained models"
Hofmann, Valentin, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, and Hinrich Schütze. "Geographic Adaptation of Pretrained Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 411–31. http://dx.doi.org/10.1162/tacl_a_00652.
Bear Don't Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need". Journal of the American Medical Informatics Association 28, no. 9 (June 21, 2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Basu, Sourya, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Vijil Chenthamarakshan, Kush R. Varshney, Lav R. Varshney, and Payel Das. "Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6788–96. http://dx.doi.org/10.1609/aaai.v37i6.25832.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning". Electronics 12, no. 16 (August 9, 2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Parmonangan, Ivan Halim, Marsella Marsella, Doharfen Frans Rino Pardede, Katarina Prisca Rijanto, Stephanie Stephanie, Kreshna Adhitya Chandra Kesuma, Valentina Tiara Cahyaningtyas, and Maria Susan Anggreainy. "Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images". Engineering, MAthematics and Computer Science (EMACS) Journal 5, no. 2 (May 31, 2023): 41–46. http://dx.doi.org/10.21512/emacsjournal.v5i2.9904.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation". Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models". Applied Sciences 13, no. 12 (June 13, 2023): 7073. http://dx.doi.org/10.3390/app13127073.
Zhou, Shengchao, Gaofeng Meng, Zhaoxiang Zhang, Richard Yi Da Xu, and Shiming Xiang. "Robust Feature Rectification of Pretrained Vision Models for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3796–804. http://dx.doi.org/10.1609/aaai.v37i3.25492.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. "Measuring and Improving Consistency in Pretrained Language Models". Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Takeoka, Kunihiro. "Low-resouce Taxonomy Enrichment with Pretrained Language Models". Journal of Natural Language Processing 29, no. 1 (2022): 259–63. http://dx.doi.org/10.5715/jnlp.29.259.
Texto completoTesis sobre el tema "Pretrained models"
Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel booking, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system, applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Learned in a generic way on very large datasets, they can then be adapted for other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline or end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to benefit from the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
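The end-to-end architecture the abstract describes can be illustrated with a minimal sketch (assuming PyTorch; the encoder below is a randomly initialized stand-in for a real pretrained SSL model such as wav2vec 2.0, and all names are hypothetical): a single model maps the raw waveform to semantic labels, with the pretrained part frozen and only a small head trained for the downstream task.

```python
import torch
import torch.nn as nn

class SSLEncoder(nn.Module):
    """Stand-in for a pretrained SSL speech encoder; in practice this
    would be loaded from a checkpoint, not randomly initialized."""
    def __init__(self, hidden=768):
        super().__init__()
        # Strided convolution plays the role of the feature extractor:
        # ~20 ms hop (320 samples) over 25 ms windows (400 samples) at 16 kHz.
        self.conv = nn.Conv1d(1, hidden, kernel_size=400, stride=320)

    def forward(self, wav):                 # wav: (batch, samples)
        x = self.conv(wav.unsqueeze(1))     # -> (batch, hidden, frames)
        return x.transpose(1, 2)            # -> (batch, frames, hidden)

class EndToEndSLU(nn.Module):
    """End-to-end SLU: raw signal in, semantic label logits out."""
    def __init__(self, n_labels, hidden=768):
        super().__init__()
        self.encoder = SSLEncoder(hidden)
        for p in self.encoder.parameters():  # freeze the pretrained part
            p.requires_grad = False
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, wav):
        frames = self.encoder(wav)
        return self.head(frames.mean(dim=1))  # mean-pool frames, classify

model = EndToEndSLU(n_labels=10)
logits = model(torch.randn(2, 16000))  # two 1-second utterances at 16 kHz
print(logits.shape)                    # (2, 10): one score per semantic label
```

A pipeline system would instead insert a transcription step between the encoder and the semantic annotation; the hybrid strategies studied in the thesis sit between these two extremes.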
Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely". Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.
Book chapters on the topic "Pretrained models"
Gad, Ahmed Fawzy. "Deploying Pretrained Models". In Practical Computer Vision Applications Using Deep Learning with CNNs, 295–338. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4167-7_7.
Jain, Shashank Mohan. "Fine-Tuning Pretrained Models". In Introduction to Transformers for NLP, 137–51. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8844-3_6.
Sun, Kaili, Xudong Luo, and Michael Y. Luo. "A Survey of Pretrained Language Models". In Knowledge Science, Engineering and Management, 442–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_36.
Souza, Fábio, Rodrigo Nogueira, and Roberto Lotufo. "BERTimbau: Pretrained BERT Models for Brazilian Portuguese". In Intelligent Systems, 403–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61377-8_28.
Song, Yunfeng, Xiaochao Fan, Yong Yang, Ge Ren, and Weiming Pan. "Large Pretrained Models on Multimodal Sentiment Analysis". In Lecture Notes in Electrical Engineering, 506–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9423-3_63.
Lovón-Melgarejo, Jesús, Jose G. Moreno, Romaric Besançon, Olivier Ferret, and Lynda Tamine. "Probing Pretrained Language Models with Hierarchy Properties". In Lecture Notes in Computer Science, 126–42. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56060-6_9.
Yarlagadda, Madhulika, Susrutha Ettimalla, and Bhanu Sri Davuluri. "Zero-Shot Document Classification Using Pretrained Models". In Multifaceted approaches for Data Acquisition, Processing & Communication, 104–10. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003470939-14.
Tan, Zhen, Lu Cheng, Song Wang, Bo Yuan, Jundong Li, and Huan Liu. "Interpreting Pretrained Language Models via Concept Bottlenecks". In Advances in Knowledge Discovery and Data Mining, 56–74. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2259-4_5.
Hao, Kaifeng, Jianfeng Li, Cuiqin Hou, Xuexuan Wang, and Pengyu Li. "Combining Pretrained and Graph Models for Text Classification". In Communications in Computer and Information Science, 422–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92307-5_49.
Ni, Bolin, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. "Expanding Language-Image Pretrained Models for General Video Recognition". In Lecture Notes in Computer Science, 1–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_1.
Conference papers on the topic "Pretrained models"
Zhang, Zhiyuan, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, and Bin He. "Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.25.
Chen, Catherine, Kevin Lin, and Dan Klein. "Constructing Taxonomies from Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.373.
Koto, Fajri, Jey Han Lau, and Timothy Baldwin. "Discourse Probing of Pretrained Language Models". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.301.
Zhou, Jingren. "Large-scale Multi-Modality Pretrained Models". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3480241.
Davison, Joe, Joshua Feldman, and Alexander Rush. "Commonsense Knowledge Mining from Pretrained Models". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1109.
Weller, Orion, Marc Marone, Vladimir Braverman, Dawn Lawrie, and Benjamin Van Durme. "Pretrained Models for Multilingual Federated Learning". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.101.
Troshin, Sergey, and Nadezhda Chirkova. "Probing Pretrained Models of Source Codes". In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.31.
Dalvi, Fahim, Hassan Sajjad, Nadir Durrani, and Yonatan Belinkov. "Analyzing Redundancy in Pretrained Transformer Models". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.398.
Tamkin, Alex, Trisha Singh, Davide Giovanardi, and Noah Goodman. "Investigating Transferability in Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.125.
Kurita, Keita, Paul Michel, and Graham Neubig. "Weight Poisoning Attacks on Pretrained Models". In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.249.
Reports on the topic "Pretrained models"
Lohn, Andrew. Poison in the Well: Securing the Shared Resources of Machine Learning. Center for Security and Emerging Technology, June 2021. http://dx.doi.org/10.51593/2020ca013.
Shrestha, Tanuja, Mir A. Matin, Vishwas Chitale, and Samuel Thomas. Exploring the potential of deep learning for classifying camera trap data: A case study from Nepal - working paper. International Centre for Integrated Mountain Development (ICIMOD), September 2023. http://dx.doi.org/10.53055/icimod.1016.