Journal articles on the topic "Contextualized Language Models"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Explore the top 50 journal articles for research on the topic "Contextualized Language Models".
Next to every work in the list of references, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of research fields and compile your bibliography correctly.
El Adlouni, Yassine, Noureddine En Nahnahi, Said Ouatik El Alaoui, Mohammed Meknassi, Horacio Rodríguez, and Nabil Alami. "Arabic Biomedical Community Question Answering Based on Contextualized Embeddings." International Journal of Intelligent Information Technologies 17, no. 3 (2021): 13–29. http://dx.doi.org/10.4018/ijiit.2021070102.
Zhou, Xuhui, Yue Zhang, Leyang Cui, and Dandan Huang. "Evaluating Commonsense in Pre-Trained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9733–40. http://dx.doi.org/10.1609/aaai.v34i05.6523.
Myagmar, Batsergelen, Jie Li, and Shigetomo Kimura. "Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models." IEEE Access 7 (2019): 163219–30. http://dx.doi.org/10.1109/access.2019.2952360.
Li, Yichen, Yintong Huo, Renyi Zhong, et al. "Go Static: Contextualized Logging Statement Generation." Proceedings of the ACM on Software Engineering 1, FSE (2024): 609–30. http://dx.doi.org/10.1145/3643754.
Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations." PLOS ONE 16, no. 9 (2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.
Xu, Yifei, Jingqiao Zhang, Ru He, et al. "SAS: Self-Augmentation Strategy for Language Model Pre-training." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11586–94. http://dx.doi.org/10.1609/aaai.v36i10.21412.
Cong, Yan. "AI Language Models: An Opportunity to Enhance Language Learning." Informatics 11, no. 3 (2024): 49. http://dx.doi.org/10.3390/informatics11030049.
Zhang, Shuiliang, Hai Zhao, Junru Zhou, Xi Zhou, and Xiang Zhou. "Semantics-Aware Inferential Network for Natural Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14437–45. http://dx.doi.org/10.1609/aaai.v35i16.17697.
Schumacher, Elliot, and Mark Dredze. "Learning unsupervised contextual representations for medical synonym discovery." JAMIA Open 2, no. 4 (2019): 538–46. http://dx.doi.org/10.1093/jamiaopen/ooz057.
Zhang, Yuhan, Wenqi Chen, Ruihan Zhang, and Xiajie Zhang. "Representing affect information in word embeddings." Experiments in Linguistic Meaning 2 (January 27, 2023): 310. http://dx.doi.org/10.3765/elm.2.5391.
Schick, Timo, and Hinrich Schütze. "Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8766–74. http://dx.doi.org/10.1609/aaai.v34i05.6403.
Gatto, Joseph, Madhusudan Basak, and Sarah Masud Preum. "Scope of Pre-trained Language Models for Detecting Conflicting Health Information." Proceedings of the International AAAI Conference on Web and Social Media 17 (June 2, 2023): 221–32. http://dx.doi.org/10.1609/icwsm.v17i1.22140.
Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. "On Measuring and Mitigating Biased Inferences of Word Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7659–66. http://dx.doi.org/10.1609/aaai.v34i05.6267.
Alshattnawi, Sawsan, Amani Shatnawi, Anas M. R. AlSobeh, and Aws A. Magableh. "Beyond Word-Based Model Embeddings: Contextualized Representations for Enhanced Social Media Spam Detection." Applied Sciences 14, no. 6 (2024): 2254. http://dx.doi.org/10.3390/app14062254.
Lin, Guanjun, Heming Jia, and Di Wu. "Distilled and Contextualized Neural Models Benchmarked for Vulnerable Function Detection." Mathematics 10, no. 23 (2022): 4482. http://dx.doi.org/10.3390/math10234482.
Ciucă, Ioana, and Yuan-Sen Ting. "Galactic ChitChat: Using Large Language Models to Converse with Astronomy Literature." Research Notes of the AAS 7, no. 9 (2023): 193. http://dx.doi.org/10.3847/2515-5172/acf85f.
Chen, Zeming, and Qiyue Gao. "Probing Linguistic Information for Logical Inference in Pre-trained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10509–17. http://dx.doi.org/10.1609/aaai.v36i10.21294.
Strokach, Alexey, Tian Yu Lu, and Philip M. Kim. "ELASPIC2 (EL2): Combining Contextualized Language Models and Graph Neural Networks to Predict Effects of Mutations." Journal of Molecular Biology 433, no. 11 (2021): 166810. http://dx.doi.org/10.1016/j.jmb.2021.166810.
Theuner, Katharina, Tomas Mikael Elmgren, Axel Götling, Marvin Carl May, and Haluk Akay. "Weaving Knowledge Graphs and Large Language Models (LLMs): Leveraging Semantics for Contextualized Design Knowledge Retrieval." Procedia CIRP 134 (2025): 1125–30. https://doi.org/10.1016/j.procir.2025.03.073.
Flemotomos, Nikolaos, Victor R. Martinez, Zhuohao Chen, Torrey A. Creed, David C. Atkins, and Shrikanth Narayanan. "Automated quality assessment of cognitive behavioral therapy sessions through highly contextualized language representations." PLOS ONE 16, no. 10 (2021): e0258639. http://dx.doi.org/10.1371/journal.pone.0258639.
Murthy, G. S. N., K. Anshu, T. Srilakshmi, CH Sumanth, and M. Naga Mounika. "AI Powered Translator: Transforming Natural Language to Database Queries." International Journal of Engineering Applied Sciences and Technology 09, no. 11 (2025): 39–43. https://doi.org/10.33564/ijeast.2025.v09i11.006.
Tang, Xiaobin, Nuo Lei, Manru Dong, and Dan Ma. "Stock Price Prediction Based on Natural Language Processing." Complexity 2022 (May 6, 2022): 1–15. http://dx.doi.org/10.1155/2022/9031900.
Davagdorj, Khishigsuren, Ling Wang, Meijing Li, Van-Huy Pham, Keun Ho Ryu, and Nippon Theera-Umpon. "Discovering Thematically Coherent Biomedical Documents Using Contextualized Bidirectional Encoder Representations from Transformers-Based Clustering." International Journal of Environmental Research and Public Health 19, no. 10 (2022): 5893. http://dx.doi.org/10.3390/ijerph19105893.
Huang, Jiayang, Yue Huang, David Yip, and Varvara Guljajeva. "Ephemera: Language as a Virus - AI-driven Interactive and Immersive Art Installation." Proceedings of the ACM on Computer Graphics and Interactive Techniques 7, no. 4 (2024): 1–8. http://dx.doi.org/10.1145/3664219.
Bian, Yifan, Dennis Küster, Hui Liu, and Eva G. Krumhuber. "Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models." Sensors 24, no. 1 (2023): 126. http://dx.doi.org/10.3390/s24010126.
Soler, Aina Garí, Matthieu Labeau, and Chloé Clavel. "The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations." Transactions of the Association for Computational Linguistics 12 (2024): 299–320. http://dx.doi.org/10.1162/tacl_a_00647.
Araujo, Vladimir, Marie-Francine Moens, and Alvaro Soto. "Learning Sentence-Level Representations with Predictive Coding." Machine Learning and Knowledge Extraction 5, no. 1 (2023): 59–77. http://dx.doi.org/10.3390/make5010005.
Al-Ghamdi, Sharefah, Hend Al-Khalifa, and Abdulmalik Al-Salman. "Fine-Tuning BERT-Based Pre-Trained Models for Arabic Dependency Parsing." Applied Sciences 13, no. 7 (2023): 4225. http://dx.doi.org/10.3390/app13074225.
Qarah, Faisal, and Tawfeeq Alsanoosy. "A Comprehensive Analysis of Various Tokenizers for Arabic Large Language Models." Applied Sciences 14, no. 13 (2024): 5696. http://dx.doi.org/10.3390/app14135696.
Sabbeh, Sahar F., and Heba A. Fasihuddin. "A Comparative Analysis of Word Embedding and Deep Learning for Arabic Sentiment Classification." Electronics 12, no. 6 (2023): 1425. http://dx.doi.org/10.3390/electronics12061425.
Garí Soler, Aina, and Marianna Apidianaki. "Let’s Play Mono-Poly: BERT Can Reveal Words’ Polysemy Level and Partitionability into Senses." Transactions of the Association for Computational Linguistics 9 (2021): 825–44. http://dx.doi.org/10.1162/tacl_a_00400.
Zhurko, Dmytro, and Iryna Bilous. "Using word embedding models in natural language processing." Technical sciences and technologies, no. 1 (39) (May 22, 2025): 151–60. https://doi.org/10.25140/2411-5363-2025-1(39)-151-160.
Zhang, Zhuosheng, Hai Zhao, Masao Utiyama, and Eiichiro Sumita. "Language Model Pre-training on True Negatives." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 14002–10. http://dx.doi.org/10.1609/aaai.v37i11.26639.
Saha, Koustuv, Ted Grover, Stephen M. Mattingly, et al. "Person-Centered Predictions of Psychological Constructs with Social Media Contextualized by Multimodal Sensing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 1 (2021): 1–32. http://dx.doi.org/10.1145/3448117.
Pezzelle, Sandro, Ece Takmaz, and Raquel Fernández. "Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation." Transactions of the Association for Computational Linguistics 9 (2021): 1563–79. http://dx.doi.org/10.1162/tacl_a_00443.
Lin, Sheng-Chieh, Minghan Li, and Jimmy Lin. "Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval." Transactions of the Association for Computational Linguistics 11 (2023): 436–52. http://dx.doi.org/10.1162/tacl_a_00556.
Ye, Yilin, Qian Zhu, Shishi Xiao, Kang Zhang, and Wei Zeng. "The Contemporary Art of Image Search: Iterative User Intent Expansion via Vision-Language Model." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024): 1–31. http://dx.doi.org/10.1145/3641019.
Zhang, Zhuosheng, Yuwei Wu, Hai Zhao, et al. "Semantics-Aware BERT for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9628–35. http://dx.doi.org/10.1609/aaai.v34i05.6510.
Mercer, Sarah, and Peter D. MacIntyre. "Introducing positive psychology to SLA." Studies in Second Language Learning and Teaching 4, no. 2 (2014): 153–72. http://dx.doi.org/10.14746/ssllt.2014.4.2.2.
Dragomir, Isabela-Anda, and Brânduşa-Oana Niculescu. "Packing and Unpacking Grammar – Towards a Communicative Approach to Teaching Language Structures." Scientific Bulletin 26, no. 2 (2021): 121–28. http://dx.doi.org/10.2478/bsaft-2021-0014.
Zeng, Ziheng, and Suma Bhat. "Getting BART to Ride the Idiomatic Train: Learning to Represent Idiomatic Expressions." Transactions of the Association for Computational Linguistics 10 (2022): 1120–37. http://dx.doi.org/10.1162/tacl_a_00510.
Berlec, Tomaž, Marko Corn, Sergej Varljen, and Primož Podržaj. "Exploring Decentralized Warehouse Management Using Large Language Models: A Proof of Concept." Applied Sciences 15, no. 10 (2025): 5734. https://doi.org/10.3390/app15105734.
Syed, Muzamil Hussain, and Sun-Tae Chung. "MenuNER: Domain-Adapted BERT Based NER Approach for a Domain with Limited Dataset and Its Application to Food Menu Domain." Applied Sciences 11, no. 13 (2021): 6007. http://dx.doi.org/10.3390/app11136007.
Şahin, Gözde Gül. "To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP." Computational Linguistics 48, no. 1 (2022): 5–42. http://dx.doi.org/10.1162/coli_a_00425.
Yoo, Yongseok. "Automated Think-Aloud Protocol for Identifying Students with Reading Comprehension Impairment Using Sentence Embedding." Applied Sciences 14, no. 2 (2024): 858. http://dx.doi.org/10.3390/app14020858.
Karaoğlan, Kürşat Mustafa. "Novel approaches for fake news detection based on attention-based deep multiple-instance learning using contextualized neural language models." Neurocomputing 602 (October 2024): 128263. http://dx.doi.org/10.1016/j.neucom.2024.128263.
MacAvaney, Sean, Sergey Feldman, Nazli Goharian, Doug Downey, and Arman Cohan. "ABNIRML: Analyzing the Behavior of Neural IR Models." Transactions of the Association for Computational Linguistics 10 (2022): 224–39. http://dx.doi.org/10.1162/tacl_a_00457.
Liu, Jiaqing, Chong Deng, Qinglin Zhang, et al. "Recording for Eyes, Not Echoing to Ears: Contextualized Spoken-to-Written Conversion of ASR Transcripts." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (2025): 24623–31. https://doi.org/10.1609/aaai.v39i23.34642.
Jouffroy, Jordan, Sarah F. Feldman, Ivan Lerner, Bastien Rance, Anita Burgun, and Antoine Neuraz. "Hybrid Deep Learning for Medication-Related Information Extraction From Clinical Texts in French: MedExt Algorithm Development Study." JMIR Medical Informatics 9, no. 3 (2021): e17934. http://dx.doi.org/10.2196/17934.
Gamallo, Pablo. "Compositional Distributional Semantics with Syntactic Dependencies and Selectional Preferences." Applied Sciences 11, no. 12 (2021): 5743. http://dx.doi.org/10.3390/app11125743.