A selection of scholarly literature on the topic "Contextualized Language Models"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Contextualized Language Models".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Contextualized Language Models"

1

El Adlouni, Yassine, Noureddine En Nahnahi, Said Ouatik El Alaoui, Mohammed Meknassi, Horacio Rodríguez, and Nabil Alami. "Arabic Biomedical Community Question Answering Based on Contextualized Embeddings." International Journal of Intelligent Information Technologies 17, no. 3 (2021): 13–29. http://dx.doi.org/10.4018/ijiit.2021070102.

Abstract:
Community question answering has become increasingly important as they are practical for seeking and sharing information. Applying deep learning models often leads to good performance, but it requires an extensive amount of annotated data, a problem exacerbated for languages suffering a scarcity of resources. Contextualized language representation models have gained success due to promising results obtained on a wide array of downstream natural language processing tasks such as text classification, textual entailment, and paraphrase identification. This paper presents a novel approach by fine-
2

Zhou, Xuhui, Yue Zhang, Leyang Cui, and Dandan Huang. "Evaluating Commonsense in Pre-Trained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9733–40. http://dx.doi.org/10.1609/aaai.v34i05.6523.

Abstract:
Contextualized representations trained over large raw text data have given remarkable improvements for NLP tasks including question answering and reading comprehension. There have been works showing that syntactic, semantic and word sense knowledge are contained in such representations, which explains why they benefit such tasks. However, relatively little work has been done investigating commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by
3

Myagmar, Batsergelen, Jie Li, and Shigetomo Kimura. "Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models." IEEE Access 7 (2019): 163219–30. http://dx.doi.org/10.1109/access.2019.2952360.

4

Li, Yichen, Yintong Huo, Renyi Zhong, et al. "Go Static: Contextualized Logging Statement Generation." Proceedings of the ACM on Software Engineering 1, FSE (2024): 609–30. http://dx.doi.org/10.1145/3643754.

Abstract:
Logging practices have been extensively investigated to assist developers in writing appropriate logging statements for documenting software behaviors. Although numerous automatic logging approaches have been proposed, their performance remains unsatisfactory due to the constraint of the single-method input, without informative programming context outside the method. Specifically, we identify three inherent limitations with single-method context: limited static scope of logging statements, inconsistent logging styles, and missing type information of logging variables. To tackle these limitatio
5

Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations." PLOS ONE 16, no. 9 (2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.

Abstract:
Named entity recognition (NER) is one fundamental task in the natural language processing (NLP) community. Supervised neural network models based on contextualized word representations can achieve highly-competitive performance, which requires a large-scale manually-annotated corpus for training. While for the resource-scarce languages, the construction of such as corpus is always expensive and time-consuming. Thus, unsupervised cross-lingual transfer is one good solution to address the problem. In this work, we investigate the unsupervised cross-lingual NER with model transfer based on contex
6

Xu, Yifei, Jingqiao Zhang, Ru He, et al. "SAS: Self-Augmentation Strategy for Language Model Pre-training." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11586–94. http://dx.doi.org/10.1609/aaai.v36i10.21412.

Abstract:
The core of self-supervised learning for pre-training language models includes pre-training task design as well as appropriate data augmentation. Most data augmentations in language model pre-training are context-independent. A seminal contextualized augmentation was recently proposed in ELECTRA and achieved state-of-the-art performance by introducing an auxiliary generation network (generator) to produce contextualized data augmentation for the training of a main discrimination network (discriminator). This design, however, introduces extra computation cost of the generator and a need to adju
7

Cong, Yan. "AI Language Models: An Opportunity to Enhance Language Learning." Informatics 11, no. 3 (2024): 49. http://dx.doi.org/10.3390/informatics11030049.

Abstract:
AI language models are increasingly transforming language research in various ways. How can language educators and researchers respond to the challenge posed by these AI models? Specifically, how can we embrace this technology to inform and enhance second language learning and teaching? In order to quantitatively characterize and index second language writing, the current work proposes the use of similarities derived from contextualized meaning representations in AI language models. The computational analysis in this work is hypothesis-driven. The current work predicts how similarities should
8

Zhang, Shuiliang, Hai Zhao, Junru Zhou, Xi Zhou, and Xiang Zhou. "Semantics-Aware Inferential Network for Natural Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14437–45. http://dx.doi.org/10.1609/aaai.v35i16.17697.

Abstract:
For natural language understanding tasks, either machine reading comprehension or natural language inference, both semantics-aware and inference are favorable features of the concerned modeling for better understanding performance. Thus we propose a Semantics-Aware Inferential Network (SAIN) to meet such a motivation. Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues through an attention mechanism. By stringing these steps, the inferential network effectively learns to perform iterative reas
9

Schumacher, Elliot, and Mark Dredze. "Learning unsupervised contextual representations for medical synonym discovery." JAMIA Open 2, no. 4 (2019): 538–46. http://dx.doi.org/10.1093/jamiaopen/ooz057.

Abstract:
Objectives: An important component of processing medical texts is the identification of synonymous words or phrases. Synonyms can inform learned representations of patients or improve linking mentioned concepts to medical ontologies. However, medical synonyms can be lexically similar (“dilated RA” and “dilated RV”) or dissimilar (“cerebrovascular accident” and “stroke”); contextual information can determine if 2 strings are synonymous. Medical professionals utilize extensive variation of medical terminology, often not evidenced in structured medical resources. Therefore, the ability to
10

Zhang, Yuhan, Wenqi Chen, Ruihan Zhang, and Xiajie Zhang. "Representing affect information in word embeddings." Experiments in Linguistic Meaning 2 (January 27, 2023): 310. http://dx.doi.org/10.3765/elm.2.5391.

Abstract:
A growing body of research in natural language processing (NLP) and natural language understanding (NLU) is investigating human-like knowledge learned or encoded in the word embeddings from large language models. This is a step towards understanding what knowledge language models capture that resembles human understanding of language and communication. Here, we investigated whether and how the affect meaning of a word (i.e., valence, arousal, dominance) is encoded in word embeddings pre-trained in large neural networks. We used the human-labeled dataset (Mohammad 2018) as the ground truth and
More sources

Dissertations on the topic "Contextualized Language Models"

1

Borggren, Lukas. "Automatic Categorization of News Articles With Contextualized Language Models." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177004.

Abstract:
This thesis investigates how pre-trained contextualized language models can be adapted for multi-label text classification of Swedish news articles. Various classifiers are built on pre-trained BERT and ELECTRA models, exploring global and local classifier approaches. Furthermore, the effects of domain specialization, using additional metadata features and model compression are investigated. Several hundred thousand news articles are gathered to create unlabeled and labeled datasets for pre-training and fine-tuning, respectively. The findings show that a local classifier approach is superior t
2

Portnoff, Scott R. "(1) The case for using foreign language pedagogies in introductory computer programming instruction (2) A contextualized pre-AP computer programming curriculum: Models and simulations for exploring real-world cross-curricular topics." Thesis, California State University, Los Angeles, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10132126.

Abstract:
Large numbers of novice programmers have been failing postsecondary introductory computer science programming (CS1) courses for nearly four decades. Student learning is much worse in secondary programming courses of similar or even lesser rigor. This has critical implications for efforts to reclassify Computer Science (CS) as a core secondary subject. State departments of education have little incentive to do so until it can be demonstrated that most grade-level students will not only pass such classes, but will be well-prepared to succeed in subsequent vertically aligned coursework.

Books on the topic "Contextualized Language Models"

1

Glisan, Eileen W., ed. Teacher's handbook: Contextualized language instruction. 2nd ed. Heinle & Heinle, 2000.

2

Glisan, Eileen W., ed. Teacher's handbook: Contextualized language instruction. 3rd ed. Thomson, Heinle, 2005.

4

Shrum, Judith L., and Eileen W. Glisan. Teacher's Handbook: Contextualized Language Instruction. 2nd ed. Heinle & Heinle Publishers, 1999.

5

Shrum, Judith L., and Eileen W. Glisan. Teacher's Handbook: Contextualized Language Instruction. 3rd ed. Heinle, 2004.

6

Nelson, Eric S. Heidegger and Dao. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350411937.

Abstract:
In this innovative contribution, Eric S. Nelson offers a contextualized and systematic exploration of the Chinese sources and German language interpretations that shaped Heidegger's engagement with Daoism and his thinking of the thing, nothingness, and the freedom of releasement (Gelassenheit). Encompassing forgotten and recently published historical sources, including Heidegger's Daoist and Buddhist-related reflections in his lectures and notebooks, Nelson presents a critical intercultural reinterpretation of Heidegger's philosophical journey. Nelson analyzes the intersections and differences
7

Balbo, Andrea. Traces of Actio in Fragmentary Roman Orators. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198788201.003.0014.

Abstract:
Collecting references to the actio of fragmentary orators, this chapter explores the theoretical aspects of using the whole body (including both the speaker’s vocal delivery and his body language) in public speaking. The evidence, despite its fragmentary nature, is contextualized using the advice given by the ancient rhetorical handbooks of Cicero and Quintilian on oratorical delivery, and related to modern theories of communication. The lack of a precise theoretical framework for actio in antiquity is argued to have allowed ancient theoreticians and practitioners considerable freedom for the
8

McNaughton, James. “It all boils down to a question of words”. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822547.003.0006.

Abstract:
The Unnamable confronts inherited narrative and linguistic forms with the incommensurability of recent genocide. Initially, the book performs this inadequacy by confronting novel tropes with distorted images cribbed from memoirs of Mauthausen concentration camp. Then it updates surrealist treatments of Parisian abattoirs, asking whether industrialized slaughter is also the sign and fulfillment of modern genocide. The Unnamable also confuses literary production and the biopolitical aspirations of authoritarian politics: Beckett’s narrator writes from a conviction that language can become wholly
9

Burke, Tony, and Brent Landau, eds. New Testament Apocrypha. Wm. B. Eerdmans Publishing Co., 2016. https://doi.org/10.5040/bci-0kbz.

Abstract:
Compilation of little-known and never-before-published apocryphal Christian texts in English translation. This anthology of ancient nonbiblical Christian literature presents informed introductions to and readable translations of a wide range of little-known apocryphal texts, most of which have never before been translated into any modern language. An introduction to the volume as a whole addresses the most significant features of the writings included and contextualizes them within the contemporary study of the Christian Apocrypha. The body of the book comprises thirty texts that have been care
10

Walker, Katherine. Shakespeare and Science. Bloomsbury Publishing Plc, 2022. http://dx.doi.org/10.5040/9781350044654.

Abstract:
With the recent turn to science studies and interdisciplinary research in Shakespearean scholarship, Shakespeare and Science: A Dictionary provides a pedagogical resource for students and scholars. In charting Shakespeare’s engagement with natural philosophical discourse, this edition shapes the future of Shakespearean scholarship and pedagogy significantly, appealing to students entering the field and current scholars in interdisciplinary research on the topic alongside the non-professional reader seeking to understand Shakespeare’s language and early modern scientific practices. Shakespeare
More sources

Book chapters on the topic "Contextualized Language Models"

1

Gao, Ya, Shaoxiong Ji, Tongxuan Zhang, Prayag Tiwari, and Pekka Marttinen. "Contextualized Graph Embeddings for Adverse Drug Event Detection." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26390-3_35.

Abstract:
An adverse drug event (ADE) is defined as an adverse reaction resulting from improper drug use, reported in various documents such as biomedical literature, drug reviews, and user posts on social media. The recent advances in natural language processing techniques have facilitated automated ADE detection from documents. However, the contextualized information and relations among text pieces are less explored. This paper investigates contextualized language models and heterogeneous graph representations. It builds a contextualized graph embedding model for adverse drug event detection.
2

Li, Ruijia, Yiting Wang, Chanjin Zheng, Yuan-Hao Jiang, and Bo Jiang. "Generating Contextualized Mathematics Multiple-Choice Questions Utilizing Large Language Models." In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64315-6_48.

3

Santos, Joaquim, Henrique D. P. dos Santos, Fábio Tabalipa, and Renata Vieira. "De-Identification of Clinical Notes Using Contextualized Language Models and a Token Classifier." In Intelligent Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91699-2_3.

4

Straka, Milan, Jakub Náplava, Jana Straková, and David Samuel. "RobeCzech: Czech RoBERTa, a Monolingual Contextualized Language Representation Model." In Text, Speech, and Dialogue. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83527-9_17.

5

Sarhan, Injy, and Marco R. Spruit. "Contextualized Word Embeddings in a Neural Open Information Extraction Model." In Natural Language Processing and Information Systems. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23281-8_31.

6

Yan, Faren, Peng Yu, and Xin Chen. "LTNER: Large Language Model Tagging for Named Entity Recognition with Contextualized Entity Marking." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78495-8_25.

7

Ciula, Arianna, Øyvind Eide, Cristina Marras, and Patrick Sahle. "Model and Modelling in Digital Humanities." In Modelling Between Digital and Humanities. Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0369.01.

Abstract:
Chapter 1, Towards a new language for modelling, proposes a selection of lexical ramifications and a semantic excursus on the terms model/modelling. Some etymological reflections on these terms and selected occurrences in the Western history of thought are mapped out. The focus of the chapter is on the history and the polysemy characterising the terms model and modelling with the aim to offer some reflections on their current use, and to foreground the pragmatic elements implied by the concept of model in modelling practices and by the use of language (metalanguage). The underlying assumption
8

Clossey, Luke. "10. Making Canon." In Jesus and the Making of the Modern Mind, 1380-1520. Open Book Publishers, 2024. http://dx.doi.org/10.11647/obp.0371.10.

Abstract:
The first half of this chapter looks at how deep-ken value was added to Bibles and Qur'ans with calligraphy, decoration, and sacred language. The second half shows how there developed a more plain-ken attitude, which manifested in an acceptance of messy vernacular translations and two efficiency revolutions—(1) automation in Bible production through the printing press and (2) miniaturization in Qur'an production with the shift from the “perfect” muhaqqaq script to the workhorse naskh. In the plain ken, all meaning is constructed in history, by humans for humans, independent of other factors. I
9

Tran, Phuc, and Marina Tropmann-Frick. "Global Contextualized Representations: Enhancing Machine Reading Comprehension with Graph Neural Networks." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2025. https://doi.org/10.3233/faia241571.

Abstract:
This paper introduces Global Contextualized Representations (GCoRe) – an extension for existing transformer-based language models. GCoRe addresses limitations in capturing global context and long-range dependencies by utilizing Graph Neural Networks for graph inference on a context graph constructed from the input text. Global contextualized features, derived from the context graph, are added to the token representations from the base language model. Experiment results show that GCoRe improves the performance of the baseline model (DeBERTa v3) by 0.57% on the HotpotQA dataset and by 0.15% on t
10

Toddenroth, Dennis. "Classifiers of Medical Eponymy in Scientific Texts." In Caring is Sharing – Exploiting the Value in Data for Health and Innovation. IOS Press, 2023. http://dx.doi.org/10.3233/shti230271.

Abstract:
Many concepts in the medical literature are named after persons. Frequent ambiguities and spelling varieties, however, complicate the automatic recognition of such eponyms with natural language processing (NLP) tools. Recently developed methods include word vectors and transformer models that incorporate context information into the downstream layers of a neural network architecture. To evaluate these models for classifying medical eponymy, we label eponyms and counterexamples mentioned in a convenience sample of 1,079 Pubmed abstracts, and fit logistic regression models to the vectors from th

Conference papers on the topic "Contextualized Language Models"

1

Liétard, Bastien, Pascal Denis, and Mikaela Keller. "To Word Senses and Beyond: Inducing Concepts with Contextualized Language Models." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.156.

2

Park, Jun-Hyung, Mingyu Lee, Junho Kim, and SangKeun Lee. "Coconut: Contextualized Commonsense Unified Transformers for Graph-Based Commonsense Augmentation of Language Models." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.346.

3

Thapa, Maya, Puneet Kapoor, Sakshi Kaushal, and Ishani Sharma. "A Review of Contextualized Word Embeddings and Pre-Trained Language Models, with a Focus on GPT and BERT." In International Conference on Cognitive & Cloud Computing. SCITEPRESS - Science and Technology Publications, 2024. https://doi.org/10.5220/0013305900004646.

4

Mudgal, Ananya, Anshul Sharma, and Yugnanda Malhotra. "Bridging the Gap: Towards Contextualised Optical Character Recognition using Large Language Models." In Frontiers in Optics. Optica Publishing Group, 2024. https://doi.org/10.1364/fio.2024.jd4a.106.

Abstract:
Optical Character Recognition is used to convert handwritten/printed text to digitised text, but it lacks refinement. We propose to integrate it with a Large Language Model to create a context-driven word/sentence detection program.
5

Shaghaghian, Shohreh, Luna Yue Feng, Borna Jafarpour, and Nicolai Pogrebnyakov. "Customizing Contextualized Language Models for Legal Document Reviews." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378201.

6

Yagi, Sane Mo, Youssef Mansour, Firuz Kamalov, and Ashraf Elnagar. "Evaluation of Arabic-Based Contextualized Word Embedding Models." In 2021 International Conference on Asian Language Processing (IALP). IEEE, 2021. http://dx.doi.org/10.1109/ialp54817.2021.9675208.

7

Mansour, Youssef, and Ashraf Elnagar. "Sarcasm Detection in Arabic Text Using Contextualized Models." In 2023 International Conference on Asian Language Processing (IALP). IEEE, 2023. http://dx.doi.org/10.1109/ialp61005.2023.10337175.

8

Ross, Hayley, Jonathon Cai, and Bonan Min. "Exploring Contextualized Neural Language Models for Temporal Dependency Parsing." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.689.

9

Okamoto, Takayoshi, Tetsuya Honda, and Koji Eguchi. "Locally contextualized smoothing of language models for sentiment sentence retrieval." In Proceeding of the 1st international CIKM workshop. ACM Press, 2009. http://dx.doi.org/10.1145/1651461.1651475.

10

He, Bin, Di Zhou, Jinghui Xiao, et al. "BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models." In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.207.
