A selection of scholarly literature on the topic "Contextualized Language Models"

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Contextualized Language Models".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.

Journal articles on the topic "Contextualized Language Models"

1

Myagmar, Batsergelen, Jie Li, and Shigetomo Kimura. "Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models". IEEE Access 7 (2019): 163219–30. http://dx.doi.org/10.1109/access.2019.2952360.

2

El Adlouni, Yassine, Noureddine En Nahnahi, Said Ouatik El Alaoui, Mohammed Meknassi, Horacio Rodríguez, and Nabil Alami. "Arabic Biomedical Community Question Answering Based on Contextualized Embeddings". International Journal of Intelligent Information Technologies 17, no. 3 (July 2021): 13–29. http://dx.doi.org/10.4018/ijiit.2021070102.

Abstract:
Community question answering has become increasingly important, as such platforms are practical for seeking and sharing information. Applying deep learning models often leads to good performance, but it requires an extensive amount of annotated data, a problem exacerbated for languages suffering a scarcity of resources. Contextualized language representation models have gained success due to promising results obtained on a wide array of downstream natural language processing tasks such as text classification, textual entailment, and paraphrase identification. This paper presents a novel approach based on fine-tuning contextualized embeddings for a medical-domain community question answering task. The authors propose an architecture combining two neural models powered by pre-trained contextual embeddings to learn a sentence representation, thereafter fine-tuned on the task to compute a score used for both ranking and classification. The experimental results on SemEval Task 3 CQA show that the model significantly outperforms the state-of-the-art models by almost 2% for the '16 edition and 1% for the '17 edition.
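To make the fine-tuning recipe concrete, here is a minimal Python sketch of the pattern the abstract describes: a pre-trained contextual encoder reads a question-answer pair, and a small head maps the pooled representation to a single score used for ranking or classification. The checkpoint and the single linear head are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: scoring a question-answer pair with a pre-trained
# contextual encoder. Checkpoint and linear head are assumptions for
# illustration, not the authors' exact architecture.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
# Randomly initialized here; in practice this head is fine-tuned on the CQA task.
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

question = "What are the side effects of this medication?"
answer = "Common side effects include headache and nausea."

inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    cls = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] pair representation
score = scorer(cls)  # one scalar, usable for both ranking and classification
print(float(score))
```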
3

Zhou, Xuhui, Yue Zhang, Leyang Cui, and Dandan Huang. "Evaluating Commonsense in Pre-Trained Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9733–40. http://dx.doi.org/10.1609/aaai.v34i05.6523.

Abstract:
Contextualized representations trained over large raw text data have given remarkable improvements for NLP tasks including question answering and reading comprehension. There have been works showing that syntactic, semantic, and word sense knowledge are contained in such representations, which explains why they benefit such tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bi-directional context and a larger training set are bonuses. We additionally find that current models do poorly on tasks that require more inference steps. Finally, we test the robustness of the models by constructing dual test cases, which are correlated so that a correct prediction on one sample should lead to a correct prediction on the other. Interestingly, the models show confusion on these test cases, which suggests that they learn commonsense at the surface rather than at a deep level. We publicly release a test set, named CATs, for future research.
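One standard way to probe a masked LM on such minimally different "dual" sentences is pseudo-log-likelihood scoring: mask each token in turn and sum the log-probabilities of the original tokens. The sketch below is a generic probe of this kind, with invented sentences; it is not the CATs evaluation protocol itself.

```python
# Sketch: ranking two minimally different sentences by masked-LM
# pseudo-log-likelihood. Sentences are invented examples.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# A model with commonsense should prefer the plausible variant.
print(pseudo_log_likelihood("He put the elephant in the fridge."))
print(pseudo_log_likelihood("He put the butter in the fridge."))
```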
4

Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations". PLOS ONE 16, no. 9 (September 21, 2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.

Abstract:
Named entity recognition (NER) is one fundamental task in the natural language processing (NLP) community. Supervised neural network models based on contextualized word representations can achieve highly competitive performance, but they require a large-scale manually annotated corpus for training. For resource-scarce languages, the construction of such a corpus is expensive and time-consuming, so unsupervised cross-lingual transfer is one good solution to address the problem. In this work, we investigate unsupervised cross-lingual NER with model transfer based on contextualized word representations, which greatly advances cross-lingual NER performance. We study several model transfer settings of unsupervised cross-lingual NER, including (1) different types of pretrained transformer-based language models as input, (2) exploration strategies for the multilingual contextualized word representations, and (3) multi-source adaptation. In particular, we propose an adapter-based word representation method combined with a parameter generation network (PGN) to better capture the relationship between the source and target languages. We conduct experiments on a benchmark CoNLL dataset involving four languages to simulate the cross-lingual setting. Results show that we can obtain highly competitive performance by cross-lingual model transfer. In particular, our proposed adapter-based PGN model leads to significant improvements for cross-lingual NER.
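The basic model-transfer recipe underlying this line of work can be sketched in a few lines: fine-tune a multilingual encoder for NER on a source language, then apply it unchanged to target-language text. Checkpoint, label set, and example are illustrative assumptions; the paper's adapter and PGN components are not shown.

```python
# Sketch of zero-shot cross-lingual NER model transfer. The fine-tuning
# step on source-language data is elided; an untrained head would of
# course produce meaningless labels.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(labels)
)
# ... fine-tune on English CoNLL data here ...

# Zero-shot inference on a German sentence with the English-trained model.
inputs = tokenizer("Angela Merkel besuchte Berlin.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [labels[i] for i in pred])))
```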
5

Schumacher, Elliot, and Mark Dredze. "Learning unsupervised contextual representations for medical synonym discovery". JAMIA Open 2, no. 4 (November 4, 2019): 538–46. http://dx.doi.org/10.1093/jamiaopen/ooz057.

Abstract:
Objectives: An important component of processing medical texts is the identification of synonymous words or phrases. Synonyms can inform learned representations of patients or improve the linking of mentioned concepts to medical ontologies. However, medical synonyms can be lexically similar ("dilated RA" and "dilated RV") or dissimilar ("cerebrovascular accident" and "stroke"); contextual information can determine whether 2 strings are synonymous. Medical professionals utilize extensive variation of medical terminology, often not evidenced in structured medical resources. Therefore, the ability to discover synonyms, especially without reliance on training data, is an important component of processing medical texts. The ability to discover synonyms from models trained on large amounts of unannotated data removes the need to rely on annotated pairs of similar words. Models relying solely on non-annotated data can be trained on a wider variety of texts without the cost of annotation, and thus may capture a broader variety of language.
Materials and Methods: Recent contextualized deep learning representation models, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), have shown strong improvements over previous approaches in a broad variety of tasks. We leverage these contextualized deep learning models to build representations of synonyms, which integrate the context of the surrounding sentence and use character-level models to alleviate out-of-vocabulary issues. Using these models, we perform unsupervised discovery of likely synonym matches, which reduces the reliance on expensive training data.
Results: We use the ShARe/CLEF eHealth Evaluation Lab 2013 Task 1b data to evaluate our synonym discovery method. Comparing our proposed contextualized deep learning representations to previous non-neural representations, we find that the contextualized representations show consistent improvement over non-contextualized models in all metrics.
Conclusions: Our results show that contextualized models produce effective representations for synonym discovery. We expect that the use of these representations in other tasks would produce similar gains in performance.
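The core idea can be sketched by comparing two medical terms through the contextualized embeddings of their mentions. Mean-pooling the mention's subword vectors is one simple choice for the mention representation, not necessarily the authors' exact one.

```python
# Sketch: cosine similarity between contextualized mention embeddings,
# the basic signal behind unsupervised synonym discovery.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def mention_vector(sentence: str, mention: str) -> torch.Tensor:
    start = sentence.index(mention)
    end = start + len(mention)
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    # Mean-pool the subword vectors whose character spans fall inside the mention.
    keep = [i for i, (s, e) in enumerate(offsets) if s >= start and e <= end and e > s]
    return hidden[keep].mean(dim=0)

a = mention_vector("The patient suffered a stroke last year.", "stroke")
b = mention_vector("The patient suffered a cerebrovascular accident last year.",
                   "cerebrovascular accident")
print(torch.cosine_similarity(a, b, dim=0).item())
```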
6

Schick, Timo, and Hinrich Schütze. "Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8766–74. http://dx.doi.org/10.1609/aaai.v34i05.6403.

Abstract:
Pretraining deep neural network architectures with a language modeling objective has brought large improvements for many natural language processing tasks. Using BERT, one such recently proposed architecture, as an example, we demonstrate that despite being trained on huge amounts of data, deep language models still struggle to understand rare words. To fix this problem, we adapt Attentive Mimicking, a method designed to explicitly learn embeddings for rare words, to deep language models. To make this possible, we introduce one-token approximation, a procedure that enables us to use Attentive Mimicking even when the underlying language model uses subword-based tokenization, i.e., does not assign embeddings to all words. To evaluate our method, we create a novel dataset that tests the ability of language models to capture semantic properties of words without any task-specific fine-tuning. Using this dataset, we show that adding our adapted version of Attentive Mimicking to BERT substantially improves its understanding of rare words.
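The problem the paper targets is easy to reproduce: subword tokenizers shatter rare words into several pieces, so the model has no single embedding for them. A small sketch (the word list is illustrative):

```python
# Frequent words map to one vocabulary piece; rare words fragment into
# several, which is why a dedicated rare-word embedding method helps.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["house", "kumquat", "myrmecology"]:
    print(word, "->", tokenizer.tokenize(word))
```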
7

Strokach, Alexey, Tian Yu Lu, and Philip M. Kim. "ELASPIC2 (EL2): Combining Contextualized Language Models and Graph Neural Networks to Predict Effects of Mutations". Journal of Molecular Biology 433, no. 11 (May 2021): 166810. http://dx.doi.org/10.1016/j.jmb.2021.166810.

8

Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. "On Measuring and Mitigating Biased Inferences of Word Embeddings". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7659–66. http://dx.doi.org/10.1609/aaai.v34i05.6267.

Abstract:
Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences in downstream models that rely on them. We use this observation to design a mechanism for measuring stereotypes using the task of natural language inference. We demonstrate a reduction in invalid inferences via bias mitigation strategies on static word embeddings (GloVe). Further, we show that for gender bias, these techniques extend to contextualized embeddings when applied selectively only to the static components of contextualized embeddings (ELMo, BERT).
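The measurement idea can be sketched with an off-the-shelf NLI model: for templated premise-hypothesis pairs whose gold label should be neutral, probability mass drifting toward entailment or contradiction indicates bias. Model choice and templates here are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: probing an NLI model with a pair whose correct label is
# "neutral" (the premise does not state the doctor's gender).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "The doctor bought a bagel."
hypothesis = "The man bought a bagel."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.3f}")  # mass leaking out of "neutral" signals bias
```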
9

Garí Soler, Aina, and Marianna Apidianaki. "Let’s Play Mono-Poly: BERT Can Reveal Words’ Polysemy Level and Partitionability into Senses". Transactions of the Association for Computational Linguistics 9 (2021): 825–44. http://dx.doi.org/10.1162/tacl_a_00400.

Abstract:
Pre-trained language models (LMs) encode rich information about linguistic structure but their knowledge about lexical polysemy remains unclear. We propose a novel experimental setup for analyzing this knowledge in LMs specifically trained for different languages (English, French, Spanish, and Greek) and in multilingual BERT. We perform our analysis on datasets carefully designed to reflect different sense distributions, and control for parameters that are highly correlated with polysemy such as frequency and grammatical category. We demonstrate that BERT-derived representations reflect words’ polysemy level and their partitionability into senses. Polysemy-related information is more clearly present in English BERT embeddings, but models in other languages also manage to establish relevant distinctions between words at different polysemy levels. Our results contribute to a better understanding of the knowledge encoded in contextualized representations and open up new avenues for multilingual lexical semantics research.
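A simple probe in this vein compares BERT vectors of the same word across contexts: a polysemous word used in two senses should be less self-similar than a monosemous word in two contexts. The sketch below uses invented sentences and assumes each target word survives tokenization as a single piece.

```python
# Sketch: self-similarity of a word's contextualized vectors as a crude
# polysemy signal.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    idx = tokens.index(word)  # assumes the word is a single vocabulary piece
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

poly = torch.cosine_similarity(
    word_vector("She sat on the bank of the river.", "bank"),
    word_vector("He deposited money at the bank.", "bank"), dim=0)
mono = torch.cosine_similarity(
    word_vector("He chopped a carrot for the soup.", "carrot"),
    word_vector("The rabbit nibbled a carrot.", "carrot"), dim=0)
print(f"bank: {poly.item():.3f}  carrot: {mono.item():.3f}")
```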
10

Saha, Koustuv, Ted Grover, Stephen M. Mattingly, Vedant Das Swain, Pranshu Gupta, Gonzalo J. Martinez, Pablo Robles-Granda, Gloria Mark, Aaron Striegel, and Munmun De Choudhury. "Person-Centered Predictions of Psychological Constructs with Social Media Contextualized by Multimodal Sensing". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 1 (March 19, 2021): 1–32. http://dx.doi.org/10.1145/3448117.

Abstract:
Personalized predictions have shown promise in various disciplines, but they are fundamentally constrained in their ability to generalize across individuals. These models are often trained on limited datasets which do not represent the fluidity of human functioning. In contrast, generalized models capture normative behaviors between individuals but lack precision in predicting individual outcomes. This paper aims to balance the tradeoff between one-for-each and one-for-all models by clustering individuals on mutable behaviors and conducting cluster-specific predictions of psychological constructs in a multimodal sensing dataset of 754 individuals. Specifically, we situate our modeling on social media, which has exhibited capability in inferring psychosocial attributes. We hypothesize that complementing social media data with offline sensor data can help to personalize and improve predictions. We cluster individuals on physical behaviors captured via Bluetooth, wearables, and smartphone sensors. We build contextualized models predicting psychological constructs trained on each cluster's social media data and compare their performance against generalized models trained on all individuals' data. The comparison reveals no difference in predicting affect and a decline in predicting cognitive ability, but an improvement in predicting personality, anxiety, and sleep quality. We construe that our approach improves the prediction of psychological constructs that share theoretical associations with physical behavior. We also examine how social media language associates with offline behavioral context. Our work bears implications for understanding the nuanced strengths and weaknesses of personalized predictions and how their effectiveness may vary by multiple factors. It underscores the importance of critically evaluating effectiveness before investing effort in personalization.
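The cluster-then-predict idea is straightforward to sketch: group people by sensor-derived behaviors, then fit one predictor per cluster on their social-media features. The sketch below uses random placeholder data and off-the-shelf scikit-learn components, not the authors' features or pipeline.

```python
# Sketch: cluster individuals on offline-sensor behavior, then train a
# cluster-specific model on social-media features. All data is random
# placeholder material.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
sensor = rng.normal(size=(754, 12))   # offline behavior features per person
social = rng.normal(size=(754, 50))   # social-media language features
anxiety = rng.normal(size=754)        # one psychological construct

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(sensor)
models = {
    c: Ridge().fit(social[clusters == c], anxiety[clusters == c])
    for c in np.unique(clusters)
}
# A person's prediction routes through their own cluster's model.
person = 42
print(models[clusters[person]].predict(social[[person]]))
```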

Dissertations on the topic "Contextualized Language Models"

1

Borggren, Lukas. "Automatic Categorization of News Articles With Contextualized Language Models". Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177004.

Abstract:
This thesis investigates how pre-trained contextualized language models can be adapted for multi-label text classification of Swedish news articles. Various classifiers are built on pre-trained BERT and ELECTRA models, exploring global and local classifier approaches. Furthermore, the effects of domain specialization, additional metadata features, and model compression are investigated. Several hundred thousand news articles are gathered to create unlabeled and labeled datasets for pre-training and fine-tuning, respectively. The findings show that a local classifier approach is superior to a global classifier approach and that BERT outperforms ELECTRA significantly. Notably, a baseline classifier built on SVMs yields competitive performance. The effect of further in-domain pre-training varies; ELECTRA’s performance improves while BERT’s is largely unaffected. It is found that utilizing metadata features in combination with text representations improves performance. Both BERT and ELECTRA exhibit robustness to quantization and pruning, allowing model sizes to be cut in half without any performance loss.
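The thesis's basic setup, multi-label classification on top of a pre-trained encoder, can be sketched as follows. The checkpoint and category names are placeholders; the thesis itself uses Swedish models and news categories.

```python
# Sketch: multi-label news classification with a sigmoid over independent
# labels (an article may belong to several categories at once). The head
# is untrained here; in practice it is fine-tuned with a BCE loss.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

categories = ["politics", "sports", "economy", "culture"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(categories),
    problem_type="multi_label_classification",  # sigmoid + BCE instead of softmax
)

inputs = tokenizer("The parliament debated the new sports arena budget.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
print([c for c, p in zip(categories, probs) if p > 0.5])
```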
2

Portnoff, Scott R. "(1) The case for using foreign language pedagogies in introductory computer programming instruction (2) A contextualized pre-AP computer programming curriculum: Models and simulations for exploring real-world cross-curricular topics". Thesis, California State University, Los Angeles, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10132126.

Abstract:

Large numbers of novice programmers have been failing postsecondary introductory computer science programming (CS1) courses for nearly four decades. Student learning is much worse in secondary programming courses of similar or even lesser rigor. This has critical implications for efforts to reclassify Computer Science (CS) as a core secondary subject. State departments of education have little incentive to do so until it can be demonstrated that most grade-level students will not only pass such classes, but will be well-prepared to succeed in subsequent vertically aligned coursework.

One rarely considered cause for such massive failure is insufficient pedagogic attention to teaching a programming language (PL) as a language, per se. Students who struggle with acquiring proficiency in using a PL can be likened to students who flounder in a French class due to a poor grasp of the language's syntactic or semantic features. Though natural languages (NL) and PLs differ in many key respects, a recently reported (2014) fMRI study has demonstrated that comprehension of computer programs primarily utilizes regions of the brain involved in language processing, not math. The implications for CS pedagogy are that, if PLs are learned in ways fundamentally similar to how second languages (L2) are acquired, foreign language pedagogies (FLP) and second language acquisition (SLA) theories can be key sources for informing the crafting of effective PL teaching strategies.

In this regard, key features of contemporary L2 pedagogies relevant to effective PL instruction—reflecting the late 20th-century shift in emphasis from cognitive learning that stressed grammatical knowledge, to one that facilitates communication and practical uses of the language—are: (1) repetitive and comprehensible input in a variety of contexts, and (2) motivated, meaningful communication and interaction.

Informed by these principles, four language-based strategies adapted for PL instruction are described, the first to help students acquire syntax and three others for learning semantics: (a) memorization; (b) setting components in relief; (c) transformations; and (d) ongoing exposure.

Anecdotal observations in my classroom have long indicated that memorization of small programs and program fragments can immediately and drastically reduce the occurrence of syntax errors among novice pre-AP Java programming students. A modest first experiment attempting to confirm the effect was statistically unconvincing: for students most likely to struggle, the Pearson coefficient of −0.474 (p < 0.064) suggested a low-modest inverse correlation. A follow-up study will be better designed. Still, a possible explanation for the anecdotal phenomenon is that the repetition required for proficient memorization activates the same subconscious language acquisition processes that construct NL grammars when learners are exposed to language data.

Dismal retention rates subsequent to the introductory programming course have historically also been a persistent problem. One key factor impacting attrition is a student's intrinsic motivation, which is shaped both by interest in, and self-efficacy with regards to, the subject matter. Interest involves not just CS concepts, but also context, the domains used to illustrate how one can apply those concepts. One way to tap into a wide range of student interests is to demonstrate the capacity of CS to explore, model, simulate and solve non-trivial problems in domains across the academic spectrum, fields that students already value and whose basic concepts they already understand.

An original University of California "G" elective (UCOP "a-g" approved) pre-AP programming course designed along these principles is described. In this graphics-based Processing course, students are guided through the process of writing and studying small dynamic art programs, progressing to mid-size graphics programs that model or simulate real-world problems and phenomena in geography, biology, political science and astronomy. The contextualized course content combined with the language-specific strategies outlined above addresses both interest and self-efficacy. Although anecdotally these appear to have a positive effect on student understanding and retention, studies need to be done on a larger scale to validate these outcomes.

Finally, a critique is offered of the movement to replace rigorous secondary programming instruction with survey courses—particularly Exploring Computer Science and APCS Principles—under the guise of "democratizing" secondary CS education or to address the severe and persistent demographic disparities. This group of educators has promulgated a nonsensical fiction that programming is simply one of many subdisciplines of the field, rather than the core skill needed to understand all other CS topics in any deep and meaningful way. These courses present a facade of mitigating demographic disparities, but leave participants no better prepared for subsequent CS study.


Books on the topic "Contextualized Language Models"

1

Glisan, Eileen W., ed. Teacher's handbook: Contextualized language instruction. 2nd ed. Boston, Mass.: Heinle & Heinle, 2000.

2

Glisan, Eileen W., ed. Teacher's handbook: Contextualized language instruction. 3rd ed. Southbank, Victoria, Australia: Thomson, Heinle, 2005.

3

Shrum, Judith L., and Eileen W. Glisan. Teacher's Handbook: Contextualized Language Instruction. 3rd ed. Heinle, 2004.

4

Shrum, Judith L., and Eileen W. Glisan. Teacher's Handbook: Contextualized Language Instruction. 2nd ed. Heinle & Heinle Publishers, 1999.

5

Balbo, Andrea. Traces of Actio in Fragmentary Roman Orators. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198788201.003.0014.

Abstract:
Collecting references to the actio of fragmentary orators, this chapter explores the theoretical aspects of using the whole body (including both the speaker’s vocal delivery and his body language) in public speaking. The evidence, despite its fragmentary nature, is contextualized using the advice given by the ancient rhetorical handbooks of Cicero and Quintilian on oratorical delivery, and related to modern theories of communication. The lack of a precise theoretical framework for actio in antiquity is argued to have allowed ancient theoreticians and practitioners considerable freedom in the representation and adoption of various elements of non-vocal delivery in their treatises and speeches.
6

McNaughton, James. “It all boils down to a question of words”. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822547.003.0006.

Abstract:
The Unnamable confronts inherited narrative and linguistic forms with the incommensurability of recent genocide. Initially, the book performs this inadequacy by confronting novel tropes with distorted images cribbed from memoirs of Mauthausen concentration camp. Then it updates surrealist treatments of Parisian abattoirs, asking whether industrialized slaughter is also the sign and fulfillment of modern genocide. The Unnamable also confuses literary production and the biopolitical aspirations of authoritarian politics: Beckett’s narrator writes from a conviction that language can become wholly performative and has the capacity to incarnate and to kill. The narrator attempts to deconstruct language, but doing so ironically transcends literary and philosophical problems to reveal historiographical problems as well, the missing voices of those killed without trace. The chapter ends with a theoretical coda that productively contextualizes Beckett’s strategy with historiographical debate about narrative and genocide by Paul Ricoeur, Giorgio Agamben, Hayden White, and others.
7

Elior, Rachel. Jewish Mysticism. Translated by Arthur B. Millman. Liverpool University Press, 2007. http://dx.doi.org/10.3828/liverpool/9781874774679.001.0001.

Abstract:
Mysticism is one of the central sources of inspiration of religious thought. It is an attempt to decode the mystery of divine existence by penetrating to the depths of consciousness through language, memory, myth, and symbolism. By offering an alternative perspective on the world that gives expression to yearnings for freedom and change, mysticism engenders new modes of authority and leadership; as such it plays a decisive role in moulding religious and social history. For all these reasons, the mystical corpus deserves study and discussion in the framework of cultural criticism and research. This book is a lyrical exposition of the Jewish mystical phenomenon. Its purpose is to present the meanings of the mystical works as they were perceived by their creators and readers. At the same time, it contextualizes them within the boundaries of the religion, culture, language, and spiritual and historical circumstances in which the destiny of the Jewish people has evolved. The book conveys the richness of the mystical experience in discovering the infinity of meaning embedded in the sacred text and explains the multivalent symbols. It illustrates the varieties of the mystical experience from antiquity to the twentieth century. The translations of texts communicate the mystical experiences vividly and make it easy for the reader to understand how the book uses them to explain the relationship between the revealed world and the hidden world and between the mystical world and the traditional religious world, with all the social and religious tensions this has caused.
8

Jones, Chris. Fossil Poetry. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198824527.001.0001.

Abstract:
Fossil Poetry provides the first book-length overview of the place of Anglo-Saxon in nineteenth-century poetry in English. It addresses the use of Anglo-Saxon as a resource by Romantic and Victorian poets in their own compositions, as well as the construction and ‘invention’ of Anglo-Saxon in and by nineteenth-century poetry. Fossil Poetry takes its title from a famous passage on ‘early’ language in the essays of Ralph Waldo Emerson, and uses the metaphor of the fossil to contextualize poetic Anglo-Saxonism within the developments that had been taking place in the fields of geology, palaeontology, and the evolutionary life sciences since James Hutton’s apprehension of ‘deep time’ in his 1788 Theory of the Earth. Fossil Poetry argues that two phases of poetic Anglo-Saxonism took place over the course of the nineteenth century: firstly, a phase of ‘constant roots’ whereby Anglo-Saxon is constructed to resemble, and so aetiologically to legitimize, a tradition of English Romanticism conceived as essential and unchanging; secondly, a phase in which the strangeness of many of the ‘extinct’ philological forms of early English is acknowledged, and becomes concurrent with a desire to recover and recuperate the fossils of Anglo-Saxon within contemporary English poetry. A wide range of eighteenth- and nineteenth-century works of antiquarianism, philology, and Anglo-Saxon scholarship forms the evidential base that underpins the advancement of these two models for understanding the place of Anglo-Saxon in nineteenth-century poetry. New archival research and readings of unpublished papers by Tennyson, Whitman, and Morris are also presented here for the first time.

Book chapters on the topic "Contextualized Language Models"

1

Straka, Milan, Jakub Náplava, Jana Straková, and David Samuel. "RobeCzech: Czech RoBERTa, a Monolingual Contextualized Language Representation Model". In Text, Speech, and Dialogue, 197–209. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83527-9_17.

2

Sarhan, Injy, and Marco R. Spruit. "Contextualized Word Embeddings in a Neural Open Information Extraction Model". In Natural Language Processing and Information Systems, 359–67. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23281-8_31.

3

Harrison, S. J. "The Poetics of Fiction: Poetic Influence on the Language of Apuleius’ Metamorphoses". In Aspects of the Language of Latin Prose. British Academy, 2005. http://dx.doi.org/10.5871/bacad/9780197263327.003.0013.

Abstract:
This chapter describes the role of poetic language in Apuleius’ Metamorphoses, and how that role has been differently perceived in Apuleian scholarship. The Metamorphoses is the Apuleian work in which poetic elements figure most prominently, for which its kinship with poetry through its character as a literary fiction is a primary explanation. The examples discussed come from especially marked moments of the novel’s plot, since it is at such moments that poetic allusion and language are likely to come to the fore, but poeticism is a universal strategy in the Metamorphoses. The chapter also argues that traditional views about poetic elements in the novel as evidence of post-classical Latinity can be superseded by a more positive and contextualized approach which opens up new perspectives on Apuleius’ place in the history of Latin prose. It then addresses a further case in which poetic language contributes to impressive and heightened scenes in the novel, and in which one can see the diction of Apuleian prose not only employing existing poetic terms, but creating new ones by analogy with poetic models.
4

Hancı-Azizoglu, Eda Başak, and Nurdan Kavaklı. "Creative Digital Writing". In Digital Pedagogies and the Transformation of Language Education, 250–66. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6745-6.ch013.

Abstract:
Second language writers can adapt their creative skills to acquire and reflect new knowledge, relying less on sophisticated vocabulary and more on contextual and inclusive language. This process is called using the poetic function of language in a second language. One way to teach the poetic function of language as part of creative writing activities to second language learners is to model digital writing in creative and innovative forms. This research study contextualizes a digital, innovative, and culturally sensitive language learning model that will enhance digital natives' learning experience through creative digital writing practices.
5

Martinez, Martha I., Anya Hurwitz, Jennifer Analla, Laurie Olsen, and Joanna Meadvin. "Cultivating Rich Language Development, Deep Learning, and Joyful Classrooms for English Learners". In Handbook of Research on Engaging Immigrant Families and Promoting Academic Success for English Language Learners, 269–93. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8283-0.ch014.

Abstract:
Although there is general consensus among educators of English learners (ELs) regarding the need for contextualized language development, it is not widely implemented. This chapter explains the theory behind this shift in teaching English language development and for teaching ELs in general. The chapter also discusses the kind of professional development teachers need to make this shift, and the importance of meaningful engagement of families in their children's learning. The chapter situates this discussion within the Sobrato Early Academic Language (SEAL) model's work with schools across California. SEAL is a PK–Grade 3 comprehensive reform focused on the needs of English learners, and is designed to create a language-rich, joyful, and rigorous education. California is an important context given the state's large EL population and recent favorable shifts in educational policy, which provide a unique opportunity for laying a foundation for improved practices and outcomes for numerous English learners.
6

Davies, Joshua. "The language of gesture: Untimely bodies and contemporary performance". In Visions and ruins. Manchester University Press, 2018. http://dx.doi.org/10.7228/manchester/9781526125934.003.0005.

Abstract:
This chapter explores the medieval interests of two twenty-first-century works of art: Elizabeth Price’s immersive video installation, The Woolworths Choir of 1979 (2012), and Michael Landy’s Saints Alive (2013). Both of these works turn to medieval culture in order to examine the untimeliness of the body, and this chapter traces their sources and explores how their work speaks with, and to, medieval representations of the body. It contextualises Price and Landy’s work with explorations of medieval effigies and the Middle English poem St Erkenwald. The methodology of this chapter is informed by Aby Warburg’s work on gesture in early modern art and interrogates moments of contact and communication across time.
7

Green, Sharon E., and Mason Gordon. "Teaching Literacy through Technology in the Middle School". In Academic Knowledge Construction and Multimodal Curriculum Development, 230–42. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4797-8.ch014.

Abstract:
This chapter explores the development of 21st-century literacies at the middle school level. A case study, situated in an international middle school that integrates literacy and digital technologies, contextualizes this discussion. Participants included a cross-section of teachers and students. A key theme for teachers was the merging of traditional and modern literacies; for students, it was literacy as language arts. Advantages of the fusion of literacy and digital technologies were (1) students' increased global awareness, (2) creativity in demonstrating learning, and (3) the abundance of resources. Distractibility, technology glitches, and the notion of breadth versus depth with regard to learning were challenges in this highly advanced technological institution.
8

Martin, Alison E. "Styling Science". In Nature Translated, 22–39. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9781474439329.003.0002.

Abstract:
This chapter sets out the critical and historical framework of this study by focusing on three key aspects. Firstly, it attends to the characteristic features of and changes in scientific writing in nineteenth-century Britain, taking examples from the work of Charles Lyell, Michael Faraday and Charles Darwin, to establish a sense of the repertoire of stylistic models on which Humboldt’s British translators could draw. A second section examines Humboldt’s own writing style and briefly addresses the difficulties inherent in translating it from the French- or German-language originals. Finally, this chapter focuses in particular on style in translation and explores the swiftly evolving field of translation theory in the nineteenth century, drawing in particular on Friedrich Schleiermacher, to contextualise the strategies that Humboldt’s translators were employing.
9

Browne, Craig, and Andrew P. Lynch. "Introduction". In Taylor and Politics, 1–16. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9780748691937.003.0001.

Abstract:
The Introduction to Taylor and Politics: A Critical Introduction provides an overview of the aims and content of the book. The chapter assesses the importance of studying Taylor’s contribution to social and political debates, and contextualises his work among the efforts of his peers, as well as within the intellectual currents of recent scholarship, including behaviourism, Marxism, analytical philosophy, and postmodernism. The Introduction provides an overview of each chapter of the book and highlights key themes that are examined, such as romanticism, modernity, democracy, recognition, modern social imaginaries, and religion and secularism. Language and multiculturalism, issues that Taylor has examined throughout his career, are also highlighted. Finally, the Introduction outlines the approach that this book takes when examining Taylor’s contribution to politics and social discourse.
10

Perler, Dominik. "The Alienation Effect in the Historiography of Philosophy". In Philosophy and the Historical Perspective, edited by Marcel van Ackeren and Lee Klein, 140–54. British Academy, 2018. http://dx.doi.org/10.5871/bacad/9780197266298.003.0009.

Abstract:
It has often been said that we should enter into a dialogue with thinkers of the past because they discussed the same problems we still have today and presented sophisticated solutions to them. I argue that this ‘dialogue model’ ignores the specific context in which many problems were created and defined. A closer look at various contexts enables us to see that philosophical problems are not as natural as they might seem. When we contextualise them, we experience a healthy alienation effect: we realise that problems discussed in the past depend on assumptions that are far from being self-evident. When we then compare these assumptions to our own, we reflect on our own theoretical framework that is not self-evident either. This leads to a denaturalisation of philosophical problems—in the past as well as in the present. The author argues for this thesis by examining late medieval discussions on mental language.

Conference papers on the topic "Contextualized Language Models"

1

Shaghaghian, Shohreh, Luna Yue Feng, Borna Jafarpour, and Nicolai Pogrebnyakov. "Customizing Contextualized Language Models for Legal Document Reviews". In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378201.

2

Ross, Hayley, Jonathon Cai, and Bonan Min. "Exploring Contextualized Neural Language Models for Temporal Dependency Parsing". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.689.

3

Okamoto, Takayoshi, Tetsuya Honda, and Koji Eguchi. "Locally contextualized smoothing of language models for sentiment sentence retrieval". In Proceeding of the 1st international CIKM workshop. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1651461.1651475.

4

He, Bin, Di Zhou, Jinghui Xiao, Xin Jiang, Qun Liu, Nicholas Jing Yuan, and Tong Xu. "BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.207.

5

Wang, Yixiao, Zied Bouraoui, Luis Espinosa Anke, and Steven Schockaert. "Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection". In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.repl4nlp-1.19.

6

Ginzburg, Dvir, Itzik Malkiel, Oren Barkan, Avi Caciularu, and Noam Koenigstein. "Self-Supervised Document Similarity Ranking via Contextualized Language Models and Hierarchical Inference". In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-acl.272.

7

Le, Thao Minh, Vuong Le, Svetha Venkatesh, and Truyen Tran. "Dynamic Language Binding in Relational Visual Reasoning". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/114.

Abstract:
We present the Language-binding Object Graph Network, the first neural reasoning method with dynamic relational structures across both visual and textual domains, with applications in visual question answering. Relaxing the common assumption made by current models that the object predicates pre-exist and stay static, passive to the reasoning process, we propose that these dynamic predicates expand across the domain borders to include pair-wise visual-linguistic object binding. In our method, these contextualized object links are actively found within each recurrent reasoning step without relying on external predicative priors. These dynamic structures reflect the conditional dual-domain object dependency given the evolving context of the reasoning through co-attention. Such discovered dynamic graphs facilitate multi-step knowledge combination and refinement that iteratively deduce the compact representation of the final answer. The effectiveness of this model is demonstrated on image question answering, with favorable performance on major VQA datasets. Our method outperforms other methods in sophisticated question-answering tasks wherein multiple object relations are involved. The graph structure effectively assists the progress of training, and therefore the network learns efficiently compared to other reasoning models.
8

Qin, Libo, Minheng Ni, Yue Zhang, and Wanxiang Che. "CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/533.

Abstract:
Multi-lingual contextualized embeddings, such as multilingual BERT (mBERT), have shown success in a variety of zero-shot cross-lingual tasks. However, these models are limited by having inconsistent contextualized representations of subwords across different languages. Existing work addresses this issue with bilingual projection and fine-tuning techniques. We propose a data augmentation framework to generate multi-lingual code-switching data to fine-tune mBERT, which encourages the model to align representations from the source and multiple target languages at once by mixing their context information. Compared with existing work, our method does not rely on bilingual sentences for training and requires only one training process for multiple target languages. Experimental results on five tasks with 19 languages show that our method leads to significantly improved performance on all tasks compared with mBERT.
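The augmentation step itself is simple to sketch: randomly swap words in a source-language sentence for their dictionary translations, so the encoder sees mixed-language contexts during fine-tuning. The toy lexicon below is invented for illustration.

```python
# Sketch of code-switching data augmentation: each word is replaced, with
# some probability, by a dictionary translation in a random target language.
import random

lexicon = {  # hypothetical bilingual dictionary entries
    "good": {"de": "gut", "es": "bueno"},
    "morning": {"de": "Morgen", "es": "mañana"},
    "friend": {"de": "Freund", "es": "amigo"},
}

def code_switch(sentence: str, ratio: float = 0.5) -> str:
    out = []
    for word in sentence.split():
        entry = lexicon.get(word.lower())
        if entry and random.random() < ratio:
            out.append(random.choice(list(entry.values())))
        else:
            out.append(word)
    return " ".join(out)

random.seed(1)
print(code_switch("good morning my friend"))  # e.g. "gut morning my amigo"
```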
9

Chen, Zhiyu, Mohamed Trabelsi, Jeff Heflin, Yinan Xu, and Brian D. Davison. "Table Search Using a Deep Contextualized Language Model". In SIGIR '20: The 43rd International ACM SIGIR conference on research and development in Information Retrieval. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3397271.3401044.

10

Liu, Liyuan, Xiang Ren, Jingbo Shang, Xiaotao Gu, Jian Peng, and Jiawei Han. "Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1153.
