Journal articles on the topic 'Contextualized Language Models'

Consult the top 50 journal articles for your research on the topic 'Contextualized Language Models.'

1

Myagmar, Batsergelen, Jie Li, and Shigetomo Kimura. "Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models." IEEE Access 7 (2019): 163219–30. http://dx.doi.org/10.1109/access.2019.2952360.

2

El Adlouni, Yassine, Noureddine En Nahnahi, Said Ouatik El Alaoui, Mohammed Meknassi, Horacio Rodríguez, and Nabil Alami. "Arabic Biomedical Community Question Answering Based on Contextualized Embeddings." International Journal of Intelligent Information Technologies 17, no. 3 (July 2021): 13–29. http://dx.doi.org/10.4018/ijiit.2021070102.

Abstract:
Community question answering has become increasingly important, as such platforms are practical for seeking and sharing information. Applying deep learning models often leads to good performance, but it requires an extensive amount of annotated data, a problem exacerbated for languages suffering from a scarcity of resources. Contextualized language representation models have gained success due to promising results obtained on a wide array of downstream natural language processing tasks such as text classification, textual entailment, and paraphrase identification. This paper presents a novel approach to a medical-domain community question answering task based on fine-tuning contextualized embeddings. The authors propose an architecture combining two neural models powered by pre-trained contextual embeddings to learn a sentence representation, thereafter fine-tuned on the task to compute a score used for both ranking and classification. The experimental results on SemEval Task 3 CQA show that the model significantly outperforms the state-of-the-art models by almost 2% for the '16 edition and 1% for the '17 edition.
3

Zhou, Xuhui, Yue Zhang, Leyang Cui, and Dandan Huang. "Evaluating Commonsense in Pre-Trained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9733–40. http://dx.doi.org/10.1609/aaai.v34i05.6523.

Abstract:
Contextualized representations trained over large raw text data have given remarkable improvements for NLP tasks including question answering and reading comprehension. Prior work has shown that syntactic, semantic, and word sense knowledge is contained in such representations, which explains why they benefit these tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bidirectional context and larger training sets are bonuses. We additionally find that current models do poorly on tasks requiring more inference steps. Finally, we test the robustness of the models by constructing dual test cases, which are correlated such that the correct prediction of one sample should lead to the correct prediction of the other. Interestingly, the models show confusion on these test cases, which suggests that they learn commonsense at the surface level rather than deeply. We publicly release a test set, named CATs, for future research.
4

Yan, Huijiong, Tao Qian, Liang Xie, and Shanguang Chen. "Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations." PLOS ONE 16, no. 9 (September 21, 2021): e0257230. http://dx.doi.org/10.1371/journal.pone.0257230.

Abstract:
Named entity recognition (NER) is a fundamental task in the natural language processing (NLP) community. Supervised neural network models based on contextualized word representations can achieve highly competitive performance, but they require a large-scale manually annotated corpus for training. For resource-scarce languages, however, the construction of such a corpus is expensive and time-consuming. Thus, unsupervised cross-lingual transfer is a good solution to this problem. In this work, we investigate unsupervised cross-lingual NER with model transfer based on contextualized word representations, which greatly advances cross-lingual NER performance. We study several model transfer settings of unsupervised cross-lingual NER, including (1) different types of pretrained transformer-based language models as input, (2) exploration strategies for multilingual contextualized word representations, and (3) multi-source adaptation. In particular, we propose an adapter-based word representation method combined with a parameter generation network (PGN) to better capture the relationship between the source and target languages. We conduct experiments on the benchmark CoNLL dataset involving four languages to simulate the cross-lingual setting. Results show that we can obtain highly competitive performance by cross-lingual model transfer. In particular, our proposed adapter-based PGN model leads to significant improvements for cross-lingual NER.
5

Schumacher, Elliot, and Mark Dredze. "Learning unsupervised contextual representations for medical synonym discovery." JAMIA Open 2, no. 4 (November 4, 2019): 538–46. http://dx.doi.org/10.1093/jamiaopen/ooz057.

Abstract:
Objectives: An important component of processing medical texts is the identification of synonymous words or phrases. Synonyms can inform learned representations of patients or improve the linking of mentioned concepts to medical ontologies. However, medical synonyms can be lexically similar ("dilated RA" and "dilated RV") or dissimilar ("cerebrovascular accident" and "stroke"); contextual information can determine whether 2 strings are synonymous. Medical professionals use extensive variation in medical terminology, often not evidenced in structured medical resources. Therefore, the ability to discover synonyms, especially without reliance on training data, is an important component in processing clinical notes. The ability to discover synonyms from models trained on large amounts of unannotated data removes the need to rely on annotated pairs of similar words. Models relying solely on non-annotated data can be trained on a wider variety of texts without the cost of annotation, and thus may capture a broader variety of language. Materials and Methods: Recent contextualized deep learning representation models, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), have shown strong improvements over previous approaches in a broad variety of tasks. We leverage these contextualized deep learning models to build representations of synonyms, which integrate the context of the surrounding sentence and use character-level models to alleviate out-of-vocabulary issues. Using these models, we perform unsupervised discovery of likely synonym matches, which reduces the reliance on expensive training data. Results: We use the ShARe/CLEF eHealth Evaluation Lab 2013 Task 1b data to evaluate our synonym discovery method. Comparing our proposed contextualized deep learning representations to previous non-neural representations, we find that the contextualized representations show consistent improvement over non-contextualized models in all metrics. Conclusions: Our results show that contextualized models produce effective representations for synonym discovery. We expect that the use of these representations in other tasks would produce similar gains in performance.
6

Schick, Timo, and Hinrich Schütze. "Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8766–74. http://dx.doi.org/10.1609/aaai.v34i05.6403.

Abstract:
Pretraining deep neural network architectures with a language modeling objective has brought large improvements for many natural language processing tasks. Taking BERT, one recently proposed such architecture, as an example, we demonstrate that despite being trained on huge amounts of data, deep language models still struggle to understand rare words. To fix this problem, we adapt Attentive Mimicking, a method that was designed to explicitly learn embeddings for rare words, to deep language models. To make this possible, we introduce one-token approximation, a procedure that enables us to use Attentive Mimicking even when the underlying language model uses subword-based tokenization, i.e., does not assign embeddings to all words. To evaluate our method, we create a novel dataset that tests the ability of language models to capture semantic properties of words without any task-specific fine-tuning. Using this dataset, we show that adding our adapted version of Attentive Mimicking to BERT substantially improves its understanding of rare words.
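The underlying idea of building a rare word's vector from pieces of known words can be sketched in a few lines. This is not the authors' Attentive Mimicking or one-token approximation; it is a minimal bag-of-subwords illustration, with a tiny invented embedding table:

```python
# Minimal sketch: approximate an embedding for an unseen word by averaging
# the embeddings of its subword pieces. The toy 3-d table is invented.
from math import sqrt

EMB = {
    "micro":  [0.9, 0.1, 0.0],
    "scope":  [0.1, 0.8, 0.1],
    "tele":   [0.8, 0.0, 0.2],
    "banana": [0.0, 0.1, 0.9],
}

def subword_embedding(subwords):
    """Average the embeddings of a word's subword pieces."""
    vecs = [EMB[s] for s in subwords]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# "microscope" is unseen; build its vector from known pieces.
rare = subword_embedding(["micro", "scope"])

# The approximation lands nearer a related word than an unrelated one.
print(cosine(rare, EMB["tele"]) > cosine(rare, EMB["banana"]))  # True
```

A real system would learn the combination function (attention over pieces and contexts) rather than use a plain average.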
7

Strokach, Alexey, Tian Yu Lu, and Philip M. Kim. "ELASPIC2 (EL2): Combining Contextualized Language Models and Graph Neural Networks to Predict Effects of Mutations." Journal of Molecular Biology 433, no. 11 (May 2021): 166810. http://dx.doi.org/10.1016/j.jmb.2021.166810.

8

Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. "On Measuring and Mitigating Biased Inferences of Word Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7659–66. http://dx.doi.org/10.1609/aaai.v34i05.6267.

Abstract:
Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences in downstream models that rely on them. We use this observation to design a mechanism for measuring stereotypes using the task of natural language inference. We demonstrate a reduction in invalid inferences via bias mitigation strategies on static word embeddings (GloVe). Further, we show that for gender bias, these techniques extend to contextualized embeddings when applied selectively only to the static components of contextualized embeddings (ELMo, BERT).
9

Garí Soler, Aina, and Marianna Apidianaki. "Let’s Play Mono-Poly: BERT Can Reveal Words’ Polysemy Level and Partitionability into Senses." Transactions of the Association for Computational Linguistics 9 (2021): 825–44. http://dx.doi.org/10.1162/tacl_a_00400.

Abstract:
Pre-trained language models (LMs) encode rich information about linguistic structure but their knowledge about lexical polysemy remains unclear. We propose a novel experimental setup for analyzing this knowledge in LMs specifically trained for different languages (English, French, Spanish, and Greek) and in multilingual BERT. We perform our analysis on datasets carefully designed to reflect different sense distributions, and control for parameters that are highly correlated with polysemy such as frequency and grammatical category. We demonstrate that BERT-derived representations reflect words’ polysemy level and their partitionability into senses. Polysemy-related information is more clearly present in English BERT embeddings, but models in other languages also manage to establish relevant distinctions between words at different polysemy levels. Our results contribute to a better understanding of the knowledge encoded in contextualized representations and open up new avenues for multilingual lexical semantics research.
10

Saha, Koustuv, Ted Grover, Stephen M. Mattingly, Vedant Das swain, Pranshu Gupta, Gonzalo J. Martinez, Pablo Robles-Granda, Gloria Mark, Aaron Striegel, and Munmun De Choudhury. "Person-Centered Predictions of Psychological Constructs with Social Media Contextualized by Multimodal Sensing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 1 (March 19, 2021): 1–32. http://dx.doi.org/10.1145/3448117.

Abstract:
Personalized predictions have shown promise in various disciplines, but they are fundamentally constrained in their ability to generalize across individuals. These models are often trained on limited datasets which do not represent the fluidity of human functioning. In contrast, generalized models capture normative behaviors between individuals but lack precision in predicting individual outcomes. This paper aims to balance the tradeoff between one-for-each and one-for-all models by clustering individuals on mutable behaviors and conducting cluster-specific predictions of psychological constructs in a multimodal sensing dataset of 754 individuals. Specifically, we situate our modeling on social media, which has exhibited capability in inferring psychosocial attributes. We hypothesize that complementing social media data with offline sensor data can help to personalize and improve predictions. We cluster individuals on physical behaviors captured via Bluetooth, wearables, and smartphone sensors. We build contextualized models predicting psychological constructs trained on each cluster's social media data and compare their performance against generalized models trained on all individuals' data. The comparison reveals no difference in predicting affect and a decline in predicting cognitive ability, but an improvement in predicting personality, anxiety, and sleep quality. We construe that our approach improves the prediction of psychological constructs that share theoretical associations with physical behavior. We also find how social media language associates with offline behavioral contextualization. Our work bears implications for understanding the nuanced strengths and weaknesses of personalized predictions, and how their effectiveness may vary by multiple factors. This work reveals the importance of taking a critical stance on evaluating effectiveness before investing efforts in personalization.
11

Zhang, Zhuosheng, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. "Semantics-Aware BERT for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9628–35. http://dx.doi.org/10.1609/aaai.v34i05.6510.

Abstract:
The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially in various machine reading comprehension and natural language inference tasks. However, existing language representation models, including ELMo, GPT, and BERT, exploit only plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling, and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT keeps the convenient usability of its BERT precursor through light fine-tuning, without substantial task-specific modifications. SemBERT is conceptually as simple as BERT but more powerful: it obtains new state-of-the-art results or substantially improves existing results on ten reading comprehension and language inference tasks.
12

Mercer, Sarah, and Peter D. MacIntyre. "Introducing positive psychology to SLA." Studies in Second Language Learning and Teaching 4, no. 2 (January 1, 2014): 153–72. http://dx.doi.org/10.14746/ssllt.2014.4.2.2.

Abstract:
Positive psychology is a rapidly expanding subfield in psychology that has important implications for the field of second language acquisition (SLA). This paper introduces positive psychology to the study of language by describing its key tenets. The potential contributions of positive psychology are contextualized with reference to prior work, including the humanistic movement in language teaching, models of motivation, the concept of an affective filter, studies of the good language learner, and the concepts related to the self. There are reasons for both encouragement and caution as studies inspired by positive psychology are undertaken. Papers in this special issue of SSLLT cover a range of quantitative and qualitative methods with implications for theory, research, and teaching practice. The special issue serves as a springboard for future research in SLA under the umbrella of positive psychology.
13

Syed, Muzamil Hussain, and Sun-Tae Chung. "MenuNER: Domain-Adapted BERT Based NER Approach for a Domain with Limited Dataset and Its Application to Food Menu Domain." Applied Sciences 11, no. 13 (June 28, 2021): 6007. http://dx.doi.org/10.3390/app11136007.

Abstract:
Entity-based information extraction is one of the main applications of natural language processing (NLP). Recently, deep transfer learning using contextualized word embeddings from pre-trained language models has shown remarkable results for many NLP tasks, including named-entity recognition (NER). BERT (Bidirectional Encoder Representations from Transformers) has gained prominent attention among contextualized word embedding models as a state-of-the-art pre-trained language model. However, it is quite expensive to train a BERT model from scratch for a new application domain, since doing so needs a huge dataset and enormous computing time. In this paper, we focus on menu entity extraction from online user reviews of restaurants and propose a simple but effective NER approach for a new domain where a large dataset is rarely available or difficult to prepare, such as the food menu domain, based on domain adaptation of the word embeddings and fine-tuning of the popular Bi-LSTM+CRF NER network with extended feature vectors. The proposed approach (named 'MenuNER') consists of two steps: (1) domain adaptation for the target domain, i.e., further pre-training of the off-the-shelf BERT language model (BERT-base) in a semi-supervised fashion on a domain-specific dataset, and (2) supervised fine-tuning of the popular Bi-LSTM+CRF network for the downstream task, with extended feature vectors obtained by concatenating word embeddings from the domain-adapted BERT model of the first step with character embeddings and POS tag features. Experimental results on a handcrafted food menu corpus built from customer reviews show that our approach to the domain-specific NER task, i.e., food menu named-entity recognition, performs significantly better than one based on the baseline off-the-shelf BERT-base model. The proposed approach achieves a 92.5% F1 score on the YELP dataset for the MenuNER task.
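The CRF layer in a Bi-LSTM+CRF tagger decodes the best tag sequence from per-token emission scores plus tag-transition scores, which lets it rule out illegal label sequences (e.g. an I- tag with no preceding B- tag). A minimal Viterbi decoder sketch, with all scores invented for illustration:

```python
# Minimal Viterbi decoding of the kind a CRF layer performs: pick the tag
# sequence maximizing emission + transition scores. Scores are invented.

def viterbi(emissions, transitions, tags):
    """emissions: one {tag: score} dict per token; transitions: {(prev, cur): score}."""
    # best[t] = (score of best path ending in tag t, that path)
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        nxt = {}
        for cur in tags:
            score, path = max(
                (best[prev][0] + transitions[(prev, cur)] + em[cur], best[prev][1])
                for prev in tags
            )
            nxt[cur] = (score, path + [cur])
        best = nxt
    return max(best.values())[1]

TAGS = ["O", "B-MENU", "I-MENU"]
# Discourage illegal O -> I-MENU jumps, encourage B -> I continuation.
TRANS = {(p, c): 0.0 for p in TAGS for c in TAGS}
TRANS[("O", "I-MENU")] = -10.0
TRANS[("B-MENU", "I-MENU")] = 2.0

# Per-tag emission scores for the tokens "the margherita pizza" (invented).
EMI = [
    {"O": 3.0, "B-MENU": 0.0, "I-MENU": 0.0},
    {"O": 0.0, "B-MENU": 2.0, "I-MENU": 1.5},
    {"O": 0.0, "B-MENU": 1.0, "I-MENU": 1.5},
]

print(viterbi(EMI, TRANS, TAGS))  # ['O', 'B-MENU', 'I-MENU']
```

Note how the transition scores flip the third token from B-MENU (its locally best emission tie) to I-MENU, continuing the entity started at token two.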
14

Wei, Hao, Mingyuan Gao, Ai Zhou, Fei Chen, Wen Qu, Yijia Zhang, and Mingyu Lu. "A Multichannel Biomedical Named Entity Recognition Model Based on Multitask Learning and Contextualized Word Representations." Wireless Communications and Mobile Computing 2020 (August 10, 2020): 1–13. http://dx.doi.org/10.1155/2020/8894760.

Abstract:
As the biomedical literature increases exponentially, biomedical named entity recognition (BNER) has become an important task in biomedical information extraction. In previous studies based on deep learning, pretrained word embeddings have become an indispensable part of neural network models, effectively improving their performance. However, the biomedical literature typically contains numerous polysemous and ambiguous words, so using fixed pretrained word representations is not appropriate. Therefore, this paper adopts pretrained embeddings from language models (ELMo) to generate dynamic word embeddings according to context. In addition, to avoid the problem of insufficient training data in specific fields and to introduce richer input representations, we propose a multitask learning multichannel bidirectional gated recurrent unit (BiGRU) model. Multiple feature representations (e.g., word-level, contextualized word-level, character-level) are fed individually or collectively into the different channels. Manual participation and feature engineering can be avoided, as the BiGRU captures features automatically. In the merge layer, multiple methods are designed to integrate the outputs of the multichannel BiGRU. We combine the BiGRU with a conditional random field (CRF) to address label dependencies in sequence labeling. Moreover, we introduce auxiliary corpora with the same entity types as the main corpora to be evaluated in a multitask learning framework, then train our model on these separate corpora while sharing parameters between them. Our model obtains promising results on the JNLPBA and NCBI-disease corpora, with F1-scores of 76.0% and 88.7%, respectively. The latter achieves the best performance among reported existing feature-based models.
15

Gamallo, Pablo. "Compositional Distributional Semantics with Syntactic Dependencies and Selectional Preferences." Applied Sciences 11, no. 12 (June 21, 2021): 5743. http://dx.doi.org/10.3390/app11125743.

Abstract:
This article describes a compositional model based on syntactic dependencies which has been designed to build contextualized word vectors, by following linguistic principles related to the concept of selectional preferences. The compositional strategy proposed in the current work has been evaluated on a syntactically controlled and multilingual dataset, and compared with Transformer BERT-like models, such as Sentence BERT, the state-of-the-art in sentence similarity. For this purpose, we created two new test datasets for Portuguese and Spanish on the basis of that defined for the English language, containing expressions with noun-verb-noun transitive constructions. The results we have obtained show that the linguistic-based compositional approach turns out to be competitive with Transformer models.
16

Jouffroy, Jordan, Sarah F. Feldman, Ivan Lerner, Bastien Rance, Anita Burgun, and Antoine Neuraz. "Hybrid Deep Learning for Medication-Related Information Extraction From Clinical Texts in French: MedExt Algorithm Development Study." JMIR Medical Informatics 9, no. 3 (March 16, 2021): e17934. http://dx.doi.org/10.2196/17934.

Abstract:
Background: Information related to patient medication is crucial for health care; however, up to 80% of this information resides solely in unstructured text. Manual extraction is difficult and time-consuming, and there is little natural language processing research on extracting medication information from unstructured French corpora. Objective: We aimed to develop a system to extract medication-related information from clinical text written in French. Methods: We developed a hybrid system combining an expert rule-based system, contextual word embeddings (embeddings from language models) trained on clinical notes, and a deep recurrent neural network (bidirectional long short-term memory with a conditional random field). The task consisted of extracting drug mentions and their related information (e.g., dosage, frequency, duration, route, condition). We manually annotated 320 clinical notes from a French clinical data warehouse to train and evaluate the model. We compared the performance of our approach to those of standard approaches: rule-based or machine-learning-only systems and classic word embeddings. We evaluated the models using token-level recall, precision, and F-measure. Results: The overall F-measure was 89.9% (precision 90.8%; recall 89.2%) when combining expert rules and contextualized embeddings, compared to 88.1% (precision 89.5%; recall 87.2%) without expert rules or contextualized embeddings. The F-measures for each category were 95.3% for medication name, 64.4% for drug class mentions, 95.3% for dosage, 92.2% for frequency, 78.8% for duration, and 62.2% for condition of intake. Conclusions: Associating expert rules, deep contextualized embeddings, and deep neural networks improved medication information extraction. Our results reveal a synergy when combining expert knowledge and latent knowledge.
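Token-level evaluation, as used in this study, compares each token's predicted label against the gold label, without requiring whole-entity spans to match. A minimal sketch of token-level precision, recall, and F-measure for one label (the example tag sequences are invented, not taken from the paper's corpus):

```python
# Token-level precision / recall / F-measure for a single label.
# Each position in `gold` and `pred` is one token's tag.

def token_prf(gold, pred, label):
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["DRUG", "O", "DOSE", "DOSE", "O", "DRUG"]
pred = ["DRUG", "DOSE", "DOSE", "O", "O", "DRUG"]

p, r, f = token_prf(gold, pred, "DOSE")
print(p, r, f)  # 0.5 0.5 0.5
```

Entity-level (exact-span) evaluation would score the DOSE entity here as a miss; token-level scoring gives partial credit, which is why the choice of granularity should always be reported.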
17

WOOLSEY, DANIEL. "From theory to research: Contextual predictors of “estar + adjective” and the study of the SLA of Spanish copula choice." Bilingualism: Language and Cognition 11, no. 3 (November 2008): 277–95. http://dx.doi.org/10.1017/s1366728908003519.

Abstract:
The current study addresses the challenge of investigating the SLA of estar with adjectives when highlighting two specific contextual meanings: comparisons within an individual frame of reference and speaker reactions as a result of immediate experience with the referent. Estar is examined within these contexts using a picture-description task and a contextualized preference task specifically designed to create clear and unambiguous contexts of comparison and immediate experience. One hundred and eleven English-speaking Spanish students at four different levels of proficiency participated in the research. Findings from the study are examined in relation to recent predictive models as well as future directions for the study of the SLA of Spanish copula choice.
18

Scarlini, Bianca, Tommaso Pasini, and Roberto Navigli. "SensEmBERT: Context-Enhanced Sense Embeddings for Multilingual Word Sense Disambiguation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8758–65. http://dx.doi.org/10.1609/aaai.v34i05.6402.

Abstract:
Contextual representations of words derived by neural language models have proven to effectively encode the subtle distinctions that might occur between different meanings of the same word. However, these representations are not tied to a semantic network, hence they leave the word meanings implicit and thereby neglect the information that can be derived from the knowledge base itself. In this paper, we propose SensEmBERT, a knowledge-based approach that brings together the expressive power of language modelling and the vast amount of knowledge contained in a semantic network to produce high-quality latent semantic representations of word meanings in multiple languages. Our vectors lie in a space comparable with that of contextualized word embeddings, thus allowing a word occurrence to be easily linked to its meaning by applying a simple nearest neighbour approach. We show that, whilst not relying on manual semantic annotations, SensEmBERT is able to match or surpass state-of-the-art results attained by most of the supervised neural approaches on the English Word Sense Disambiguation task. When scaling to other languages, our representations prove to be equally effective as their English counterpart and outperform the existing state of the art on all the Word Sense Disambiguation multilingual datasets. The embeddings are released in five different languages at http://sensembert.org.
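The nearest-neighbour disambiguation step described above is simple enough to sketch directly: embed the word occurrence in context, then pick the sense whose vector is closest by cosine similarity. The tiny 3-d vectors and sense keys below are invented for illustration; real SensEmBERT vectors are high-dimensional and tied to a semantic network:

```python
# Minimal nearest-neighbour sense linking: assign a word occurrence to the
# sense whose embedding is most similar. All vectors here are invented.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

SENSE_EMB = {
    "bank%financial": [0.9, 0.1, 0.1],
    "bank%river":     [0.1, 0.9, 0.2],
}

def nearest_sense(context_vec, senses):
    """Return the sense key whose vector is nearest to the occurrence vector."""
    return max(senses, key=lambda s: cosine(context_vec, senses[s]))

# Invented contextual vector for "bank" in "she sat on the bank of the river".
occurrence = [0.2, 0.8, 0.3]
print(nearest_sense(occurrence, SENSE_EMB))  # bank%river
```

The approach needs no task-specific training: disambiguation quality depends entirely on how well the sense vectors and the occurrence vector share a space.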
19

Li, Yongbin, Xiaohua Wang, Linhu Hui, Liping Zou, Hongjin Li, Luo Xu, and Weihai Liu. "Chinese Clinical Named Entity Recognition in Electronic Medical Records: Development of a Lattice Long Short-Term Memory Model With Contextualized Character Representations." JMIR Medical Informatics 8, no. 9 (September 4, 2020): e19848. http://dx.doi.org/10.2196/19848.

Abstract:
Background: Clinical named entity recognition (CNER), whose goal is to automatically identify clinical entities in electronic medical records (EMRs), is an important research direction of clinical text data mining and information extraction. The promotion of CNER can provide support for clinical decision making and medical knowledge base construction, which could then improve overall medical quality. Compared with English CNER, and due to the complexity of Chinese word segmentation and grammar, Chinese CNER was implemented later and is more challenging. Objective: With the development of distributed representation and deep learning, a series of models have been applied in Chinese CNER. Different from the English version, Chinese CNER is mainly divided into character-based and word-based methods that cannot make comprehensive use of EMR information and cannot solve the problem of ambiguity in word representation. Methods: In this paper, we propose a lattice long short-term memory (LSTM) model combined with a variant contextualized character representation and a conditional random field (CRF) layer for Chinese CNER: the Embeddings from Language Models (ELMo)-lattice-LSTM-CRF model. The lattice LSTM model can effectively utilize the information from characters and words in Chinese EMRs; in addition, the variant ELMo model uses Chinese characters as input instead of the character-encoding layer of the ELMo model, so as to learn domain-specific contextualized character embeddings. Results: We evaluated our method using two Chinese CNER datasets from the China Conference on Knowledge Graph and Semantic Computing (CCKS): the CCKS-2017 CNER dataset and the CCKS-2019 CNER dataset. We obtained F1 scores of 90.13% and 85.02% on the test sets of these two datasets, respectively. Conclusions: Our results show that our proposed method is effective in Chinese CNER. In addition, the results of our experiments show that variant contextualized character representations can significantly improve the performance of the model.
20

Kastrati, Zenun, Lule Ahmedi, Arianit Kurti, Fatbardh Kadriu, Doruntina Murtezaj, and Fatbardh Gashi. "A Deep Learning Sentiment Analyser for Social Media Comments in Low-Resource Languages." Electronics 10, no. 10 (May 11, 2021): 1133. http://dx.doi.org/10.3390/electronics10101133.

Abstract:
During the pandemic, when people needed to physically distance, social media platforms were one of the outlets where people expressed their opinions, thoughts, sentiments, and emotions regarding the situation. The core objective of this research study is sentiment analysis of opinions expressed on Facebook regarding the pandemic in low-resource languages. To this end, we created a large-scale dataset comprising 10,742 manually classified comments in the Albanian language. Furthermore, in this paper we report our efforts on the design and development of a sentiment analyser that relies on deep learning. We report the experimental findings obtained from our proposed sentiment analyser using various classifier models with static and contextualized word embeddings, that is, fastText and BERT, trained and validated on our collected and curated dataset. Specifically, the findings reveal that combining a BiLSTM with an attention mechanism achieved the highest performance on our sentiment analysis task, with an F1 score of 72.09%.
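The attention mechanism paired with the BiLSTM pools the per-token hidden states into one sentence vector via learned weights, so the classifier can focus on sentiment-bearing tokens. A minimal sketch of attention pooling (the token state vectors and relevance scores are invented; a real model learns the scoring function from data):

```python
# Minimal attention pooling: weighted sum of token state vectors,
# with weights given by a softmax over per-token relevance scores.
from math import exp

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(states, scores):
    """Combine token state vectors into one vector using softmax(scores) weights."""
    weights = softmax(scores)
    dim = len(states[0])
    return [sum(w * h[d] for w, h in zip(weights, states)) for d in range(dim)]

# Three 2-d token states for a short comment; the sentiment-bearing third
# token receives the highest relevance score (all values invented).
states = [[0.1, 0.2], [0.0, 0.1], [0.9, 0.8]]
scores = [0.5, 0.1, 3.0]

pooled = attention_pool(states, scores)
# The pooled vector is dominated by the highly-weighted third token.
print(all(abs(p - s) < 0.2 for p, s in zip(pooled, states[2])))  # True
```

Compared with taking only the final BiLSTM state, this pooling lets every token contribute in proportion to its learned relevance, which is typically what gives the attention variant its edge on sentiment tasks.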
APA, Harvard, Vancouver, ISO, and other styles
21

Rivera Zavala, Renzo, and Paloma Martinez. "The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study." JMIR Medical Informatics 8, no. 12 (December 3, 2020): e18953. http://dx.doi.org/10.2196/18953.

Full text
Abstract:
Background: Negation and speculation are critical elements in natural language processing (NLP) tasks such as information extraction, as these phenomena change the truth value of a proposition. In informal clinical narrative, these linguistic phenomena are used extensively to indicate hypotheses, impressions, or negative findings. Previous state-of-the-art approaches addressed negation and speculation detection with rule-based methods, but in the last few years models based on machine learning and deep learning have emerged, exploiting morphological, syntactic, and semantic features represented as sparse and dense vectors. However, although such named entity recognition (NER) methods employ a broad set of features, they are limited by the pretrained models available for a specific domain or language.
Objective: As a fundamental subsystem of any information extraction pipeline, a system for cross-lingual, domain-independent negation and speculation detection was introduced, with special focus on the biomedical scientific literature and clinical narrative. In this work, detection of negation and speculation was treated as a sequence-labeling task in which cues and the scopes of both phenomena are recognized as a sequence of nested labels in a single step.
Methods: We proposed two approaches for negation and speculation detection: (1) a bidirectional long short-term memory (Bi-LSTM) network with a conditional random field layer, using character, word, and sense embeddings to capture semantic, syntactic, and contextual patterns; and (2) bidirectional encoder representations from transformers (BERT) with fine-tuning for NER.
Results: The approach was evaluated for English and Spanish on biomedical and review text, specifically the BioScope corpus, the IULA corpus, and the SFU Spanish Review corpus, with F-measures of 86.6%, 85.0%, and 88.1%, respectively, for NeuroNER and 86.4%, 80.8%, and 91.7%, respectively, for BERT.
Conclusions: These results show that these architectures perform considerably better than the previous rule-based and conventional machine learning–based systems. Moreover, our analysis shows that pretrained word embeddings, and particularly contextualized embeddings for biomedical corpora, help capture the complexities inherent in biomedical text.
APA, Harvard, Vancouver, ISO, and other styles
22

Sato, Masatoshi, and Kim McDonough. "PRACTICE IS IMPORTANT BUT HOW ABOUT ITS QUALITY?" Studies in Second Language Acquisition 41, no. 5 (April 25, 2019): 999–1026. http://dx.doi.org/10.1017/s0272263119000159.

Full text
Abstract:
This study explored the impact of contextualized practice on second language (L2) learners’ production of wh-questions in the L2 classroom. It examined the quality of practice (correct vs. incorrect production) and the contribution of declarative knowledge to proceduralization. Thirty-four university-level English as a foreign language learners first completed a declarative knowledge test. They then engaged in various communicative activities over five weeks. Their production of wh-questions was coded for accuracy (absence of errors) and fluency (speech rate, mean length of pauses, and repair phenomena). Improvement was measured as the difference between the first and last practice sessions. The results showed that accuracy, speech rate, and pauses improved, but with distinct patterns. Regression models showed that declarative knowledge did not predict accuracy or fluency; however, declarative knowledge helped the learners engage in targetlike behaviors at the initial stage of proceduralization. Furthermore, whereas production of accurate wh-questions predicted accuracy improvement, it had no impact on fluency.
APA, Harvard, Vancouver, ISO, and other styles
23

Dandala, Bharath, Venkata Joopudi, Ching-Huei Tsou, Jennifer J. Liang, and Parthasarathy Suryanarayanan. "Extraction of Information Related to Drug Safety Surveillance From Electronic Health Record Notes: Joint Modeling of Entities and Relations Using Knowledge-Aware Neural Attentive Models." JMIR Medical Informatics 8, no. 7 (July 10, 2020): e18417. http://dx.doi.org/10.2196/18417.

Full text
Abstract:
Background: An adverse drug event (ADE) is commonly defined as “an injury resulting from medical intervention related to a drug.” Providing information related to ADEs and alerting caregivers at the point of care can reduce the risk of prescription and diagnostic errors and improve health outcomes. ADEs captured in structured data in electronic health records (EHRs), as either coded problems or allergies, are often incomplete, leading to underreporting. It is therefore important to develop capabilities to process unstructured EHR data in the form of clinical notes, which contain richer documentation of a patient’s ADEs. Several natural language processing (NLP) systems have been proposed to automatically extract information related to ADEs, but their results show that significant improvement is still required for automatic extraction of ADEs from clinical notes.
Objective: This study aims to improve the automatic extraction of ADEs and related information, such as drugs, their attributes, and reasons for administration, from patients’ clinical notes.
Methods: This research used discharge summaries from the Medical Information Mart for Intensive Care III (MIMIC-III) database obtained through the 2018 National NLP Clinical Challenges (n2c2), annotated with drugs, drug attributes (ie, strength, form, frequency, route, dosage, duration), ADEs, reasons, and relations between drugs and other entities. We developed a deep learning–based system that extracts these drug-centric concepts and relations simultaneously using a joint method enhanced with contextualized embeddings, a position-attention mechanism, and knowledge representations. The joint method generated a different sentence representation for each drug, which was then used to extract related concepts and relations simultaneously. Contextualized representations trained on the MIMIC-III database were used to capture context-sensitive meanings of words. The position-attention mechanism amplified the benefits of the joint method by generating sentence representations that capture long-distance relations. Knowledge representations were obtained from graph embeddings created using the US Food and Drug Administration Adverse Event Reporting System database to improve relation extraction, especially when contextual clues were insufficient.
Results: Our system achieved new state-of-the-art results on the n2c2 data set, with significant improvements in recognizing the crucial drug-reason (F1=0.650 versus F1=0.579) and drug-ADE (F1=0.490 versus F1=0.476) relations.
Conclusions: This study presents a system for extracting drug-centric concepts and relations that outperforms the current state of the art and shows that contextualized embeddings, position-attention mechanisms, and knowledge graph embeddings effectively improve deep learning–based concept and relation extraction. It demonstrates the potential of deep learning–based methods to help extract real-world evidence from unstructured patient data for drug safety surveillance.
APA, Harvard, Vancouver, ISO, and other styles
24

Sabernig, Katharina. "Metaphors in the Tibetan Explanatory Tantra." Religions 10, no. 6 (May 28, 2019): 346. http://dx.doi.org/10.3390/rel10060346.

Full text
Abstract:
The development of medical theories and concepts is not isolated from the societal “Zeitgeist” of any medical culture. Depending on the purpose and the audience addressed, different metaphors are used to explain different medical content. Tibetan medicine is doubtless associated with Tibetan Buddhism, and various medical topics are linked to Buddhist knowledge. In addition to the religious link, medical texts and terms also make use of nomadic or even military metaphor. In anatomical language, metaphor and metonym are usually based on visual or morphological similarities. In the case of physiological, pathological, or therapeutic processes, metaphor often deals with dynamic and strategic elements drawn from comparisons with everyday life and other spheres of activity. These models commonly relate to specific historical and cultural backgrounds: consider the European “body republic” of Renaissance medical theory, or the theory of the “cell state” devised by Rudolf Virchow (1821–1902) to explain the concept of cellular pathology. Asian examples that use state functions as metaphors for the hierarchy of internal organs in Chinese and Tibetan medicine are well known. Beyond these prominent state models, Tibetan medical language and its visual representation are rich in metaphor. In this preliminary paper, not all of the metaphors that occur can be discussed in depth; however, different types of Tibetan medical metaphor will be compared and contextualized with non-Tibetan metaphors from other contemporary and historical medical cultures.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Xuanyu. "CFGNN: Cross Flow Graph Neural Networks for Question Answering on Complex Tables." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9596–603. http://dx.doi.org/10.1609/aaai.v34i05.6506.

Full text
Abstract:
Question answering on complex tables is a challenging task for machines. In Spider, a large-scale complex table dataset, relationships between tables and columns can easily be modeled as a graph. However, most graph neural networks (GNNs) ignore the relationships of sibling nodes and use summation as the aggregation function when modeling parent-child relationships. This may cause low-degree nodes, such as column nodes in the schema graph, to obtain little information. Context information is also important for natural language. To leverage context information flow more comprehensively, we propose novel cross flow graph neural networks in this paper. The information flows of parent-child and sibling nodes cross with history states between different layers. In addition, we use a hierarchical encoding layer to obtain contextualized representations in tables. Experiments on Spider show that our approach achieves substantial performance improvements compared with previous GNN models and their variants.
APA, Harvard, Vancouver, ISO, and other styles
26

MacAvaney, Sean. "Effective and practical neural ranking." ACM SIGIR Forum 55, no. 1 (June 2021): 1–2. http://dx.doi.org/10.1145/3476415.3476432.

Full text
Abstract:
Supervised machine learning methods that use neural networks ("deep learning") have yielded substantial improvements to a multitude of Natural Language Processing (NLP) tasks in the past decade. Improvements to Information Retrieval (IR) tasks, such as ad-hoc search, lagged behind those in similar NLP tasks, despite considerable community efforts. Although there are several contributing factors, I argue in this dissertation that early attempts were not more successful because they did not properly consider the unique characteristics of IR tasks when designing and training ranking models. I first demonstrate this by showing how large-scale datasets containing weak relevance labels can successfully replace training on in-domain collections. This technique improves the variety of queries encountered when training and helps mitigate concerns of over-fitting particular test collections. I then show that dataset statistics available in specific IR tasks can be easily incorporated into neural ranking models alongside the textual features, resulting in more effective ranking models. I also demonstrate that contextualized representations, particularly those from transformer-based language models, considerably improve neural ad-hoc ranking performance. I find that this approach is neither limited to the task of ad-hoc ranking (as demonstrated by ranking clinical reports) nor English content (as shown by training effective cross-lingual neural rankers). These efforts demonstrate that neural approaches can be effective for ranking tasks. However, I observe that these techniques are impractical due to their high query-time computational costs. To overcome this, I study approaches for offloading computational cost to index-time, substantially reducing query-time latency. These techniques make neural methods practical for ranking tasks. 
Finally, I take a deep dive into better understanding the linguistic biases of the methods I propose compared to contemporary and traditional approaches. The findings from this analysis highlight potential pitfalls of recent methods and provide a way to measure progress in this area going forward.
APA, Harvard, Vancouver, ISO, and other styles
27

Vieira, Ana Gabriela Da Silva. "SURDEZ E VISUALIDADE NO ENSINO DE HISTÓRIA: UM ESTUDO DE CASO DE PESQUISA-AÇÃO." Cadernos de Educação Tecnologia e Sociedade 13, no. 4 (December 27, 2020): 401. http://dx.doi.org/10.14571/brajets.v13.n4.401-409.

Full text
Abstract:
Because the students it serves communicate mainly through a signed, visual language, teaching in bilingual schools for the deaf has made extensive use of visual resources in the classroom. This article discusses the use of these resources in History classes, based on research carried out in 2017 at a school for the deaf in a city in Rio Grande do Sul. The investigation used action research as its methodology, with the objective of testing methodologies and didactic resources for teaching History to deaf students. The use of visual resources for teaching History was explored - images, videos, collages, models, etc. - based on discussions in the area of Deaf Studies that understand visual experience as the basis of deaf culture. The results showed that there are different ways of using visual resources and that images cannot be isolated from the content discussed in the classroom; on the contrary, they need to be contextualized so that deaf students are able to appropriate historical knowledge.
APA, Harvard, Vancouver, ISO, and other styles
28

Adamou, Evangelia, Stefano De Pascale, Yekaterina García-Márkina, and Cristian Padure. "Do bilinguals generalize estar more than monolinguals and what is the role of conceptual transfer?" International Journal of Bilingualism 23, no. 6 (December 5, 2018): 1549–80. http://dx.doi.org/10.1177/1367006918812175.

Full text
Abstract:
Aims and objectives/purpose/research questions: Among the questions that remain open is whether bilingualism leads to simplification of alternatives in language in order to reduce cognitive load. This hypothesis has been supported by evidence showing that bilinguals generalize the Spanish copula estar ‘to be’ faster than monolinguals. Yet, other studies found no such clear trend. While conceptual transfer could account for the conflicting evidence in the literature, its role has not been demonstrated. Our study aims to fill this gap by testing simplification in Spanish copula choice among bilinguals and, in particular, the role of transfer. Design/methodology/approach: We used a contextualized copula choice task, comprising 28 sentences. Data and analysis: Sixty Romani–Spanish bilinguals from Mexico responded to the questionnaire in both Spanish and Romani. A control group of 62 Mexican Spanish monolinguals responded in Spanish. We constructed generalized linear mixed-effects models to analyse the results. Findings/conclusions: Analysis of the results reveals greater extension of estar among bilinguals for individual-level predicates as well as for traits not susceptible to change. Comparison of the responses of bilinguals (in Romani and in Spanish) and of Spanish monolinguals indicates that Romani could be reinforcing the generalization of estar in the Spanish responses of bilinguals. Originality: To our knowledge, this is the first study to examine copula choice in bilingual mode. In addition, it brings evidence from an under-researched community with little normative pressure. Significance/implications: Our study shows that conceptual transfer may be driving the extension of estar among bilinguals.
APA, Harvard, Vancouver, ISO, and other styles
29

Mitra, Avijit, Bhanu Pratap Singh Rawat, David D. McManus, and Hong Yu. "Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study." JMIR Medical Informatics 9, no. 7 (July 2, 2021): e27527. http://dx.doi.org/10.2196/27527.

Full text
Abstract:
Background: Accurate detection of bleeding events from electronic health records (EHRs) is crucial for identifying and characterizing different common and serious medical problems. To extract such information from EHRs, it is essential to identify the relations between bleeding events and related clinical entities (eg, bleeding anatomic sites and lab tests). With the advent of natural language processing (NLP) and deep learning (DL)-based techniques, many studies have focused on their applicability to various clinical applications. However, no prior work has utilized DL to extract relations between bleeding events and relevant entities.
Objective: In this study, we aimed to evaluate multiple DL systems on a novel EHR data set for bleeding event–related relation classification.
Methods: We first expert-annotated a new data set of 1046 deidentified EHR notes for bleeding events and their attributes. On this data set, we evaluated three state-of-the-art DL architectures for the bleeding event relation classification task, namely, the convolutional neural network (CNN), the attention-guided graph convolutional network (AGGCN), and Bidirectional Encoder Representations from Transformers (BERT). We used three BERT-based models, namely, BERT pretrained on biomedical data (BioBERT), BioBERT pretrained on clinical text (Bio+Clinical BERT), and BioBERT pretrained on EHR notes (EhrBERT).
Results: Our experiments showed that the BERT-based models significantly outperformed the CNN and AGGCN models. Specifically, BioBERT achieved a macro F1 score of 0.842, outperforming both the AGGCN (macro F1 score, 0.828) and CNN models (macro F1 score, 0.763) by 1.4% (P<.001) and 7.9% (P<.001), respectively.
Conclusions: In this comprehensive study, we explored and compared different DL systems for classifying relations between bleeding events and other medical concepts. On our corpus, BERT-based models outperformed other DL models for identifying the relations of bleeding-related entities. In addition to pretrained contextualized word representations, BERT-based models benefited from the use of target entity representations over traditional sequence representations.
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Yuxin, Christopher D. Andrews, Cindy E. Hmelo-Silver, and Cynthia D'Angelo. "Coding schemes as lenses on collaborative learning." Information and Learning Sciences 121, no. 1/2 (December 12, 2019): 1–18. http://dx.doi.org/10.1108/ils-08-2019-0079.

Full text
Abstract:
Purpose: Computer-supported collaborative learning (CSCL) is widely used at different levels of education and across disciplines and domains. Researchers in the field have proposed various conceptual frameworks toward a comprehensive understanding of CSCL. However, as the definition of CSCL is varied and contextualized, it is critical to develop a shared understanding of collaboration and common definitions for the metrics that are used. The purpose of this research is to present a synthesis that focuses explicitly on the types and features of coding schemes that are used as analytic tools for CSCL.
Design/methodology/approach: This research collected coding schemes from researchers with diverse backgrounds who participated in a series of workshops on collaborative learning and adaptive support in CSCL, as well as coding schemes from recent volumes of the International Journal of Computer-Supported Collaborative Learning (ijCSCL). Each original coding scheme was reviewed to generate an empirically grounded framework that reflects collaborative learning models.
Findings: The analysis generated 13 categories, which were further classified into three domains: cognitive, social, and integrated. Most coding schemes contained categories in the cognitive and integrated domains.
Practical implications: This synthesized coding scheme could be used as a toolkit for researchers to attend to the multiple and complex dimensions of collaborative learning and to develop a shared language of collaborative learning.
Originality/value: By analyzing a set of coding schemes, the authors highlight what CSCL researchers find important by making implicit understandings of collaborative learning visible and by proposing a common language for researchers across disciplines to communicate by referencing a synthesized framework.
APA, Harvard, Vancouver, ISO, and other styles
31

Mahajan, Diwakar, Ananya Poddar, Jennifer J. Liang, Yen-Ting Lin, John M. Prager, Parthasarathy Suryanarayanan, Preethi Raghavan, and Ching-Huei Tsou. "Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning." JMIR Medical Informatics 8, no. 11 (November 27, 2020): e22508. http://dx.doi.org/10.2196/22508.

Full text
Abstract:
Background: Although electronic health records (EHRs) have been widely adopted in health care, effective use of EHR data is often limited by redundant information in clinical notes introduced through the use of templates and copy-paste during note generation. It is thus imperative to develop solutions that can condense information while retaining its value. A step in this direction is measuring the semantic similarity between clinical text snippets. To address this problem, we participated in the 2019 National NLP Clinical Challenges (n2c2)/Open Health Natural Language Processing Consortium (OHNLP) clinical semantic textual similarity (ClinicalSTS) shared task.
Objective: This study aims to improve the performance and robustness of semantic textual similarity in the clinical domain by leveraging manually labeled data from related tasks and contextualized embeddings from pretrained transformer-based language models.
Methods: The ClinicalSTS data set consists of 1642 pairs of deidentified clinical text snippets annotated on a continuous scale of 0-5, indicating degrees of semantic similarity. We developed an iterative intermediate training approach using multi-task learning (IIT-MTL), a multi-task training approach that employs iterative data set selection. We applied this process to bidirectional encoder representations from transformers on clinical text mining (ClinicalBERT), a pretrained domain-specific transformer-based language model, and fine-tuned the resulting model on the target ClinicalSTS task. We incrementally ensembled the output of applying IIT-MTL to ClinicalBERT with the outputs of other language models (bidirectional encoder representations from transformers for biomedical text mining [BioBERT], multi-task deep neural networks [MT-DNN], and the robustly optimized BERT approach [RoBERTa]) and handcrafted features using regression-based learning algorithms. On the basis of these experiments, we adopted the top-performing configurations as our official submissions.
Results: Our system ranked first out of 87 submitted systems in the 2019 n2c2/OHNLP ClinicalSTS challenge, achieving state-of-the-art results with a Pearson correlation coefficient of 0.9010. This winning system was an ensembled model leveraging the output of IIT-MTL on ClinicalBERT together with BioBERT, MT-DNN, and handcrafted medication features.
Conclusions: This study demonstrates that IIT-MTL is an effective way to leverage annotated data from related tasks to improve performance on a target task with a limited data set. This contribution opens new avenues of exploration for optimized data set selection to generate more robust and universal contextual representations of text in the clinical domain.
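The regression-based ensembling described above can be sketched as stacked regression over per-model similarity scores. The following minimal version uses closed-form ridge regression and is a hypothetical illustration, not the authors' exact learner or features:

```python
import numpy as np

def fit_stacker(model_scores, gold, l2=1.0):
    """Learn weights that combine per-model similarity scores (N x M)
    into a final prediction via ridge regression (closed form)."""
    X = np.hstack([model_scores, np.ones((model_scores.shape[0], 1))])  # add bias column
    A = X.T @ X + l2 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ gold)

def predict(model_scores, w):
    """Apply learned stacking weights to new per-model scores."""
    X = np.hstack([model_scores, np.ones((model_scores.shape[0], 1))])
    return X @ w
```

In this scheme each base model (eg, a fine-tuned transformer) contributes one score per sentence pair, and the stacker learns how much to trust each model on the annotated training pairs.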
APA, Harvard, Vancouver, ISO, and other styles
32

Rotman, Guy, and Roi Reichart. "Deep Contextualized Self-training for Low Resource Dependency Parsing." Transactions of the Association for Computational Linguistics 7 (November 2019): 695–713. http://dx.doi.org/10.1162/tacl_a_00294.

Full text
Abstract:
Neural dependency parsing has proven very effective, achieving state-of-the-art results on numerous domains and languages. Unfortunately, it requires large amounts of labeled data, which are costly and laborious to create. In this paper, we propose a self-training algorithm that alleviates this annotation bottleneck by training a parser on its own output. Our Deep Contextualized Self-training (DCST) algorithm utilizes representation models trained on sequence-labeling tasks derived from the parser’s output on unlabeled data, and integrates these models with the base parser through a gating mechanism. We conduct experiments across multiple languages, in both low-resource in-domain and cross-domain setups, and demonstrate that DCST substantially outperforms traditional self-training as well as recent semi-supervised training methods.
APA, Harvard, Vancouver, ISO, and other styles
33

Olivares, Marisnel, Hélène Pigot, Carolina Bottari, Monica Lavoie, Taoufik Zayani, Nathalie Bier, Guylaine Le Dorze, et al. "Use of a Persona to Support the Interdisciplinary Design of an Assistive Technology for Meal Preparation in Traumatic Brain Injury." Interacting with Computers 32, no. 5-6 (September 2020): 435–56. http://dx.doi.org/10.1093/iwcomp/iwab002.

Full text
Abstract:
User-centered design (UCD) facilitates the creation of technologies that are specifically designed to answer users’ needs. This paper presents the first step of a UCD process using a persona, a fictitious character representing the targeted population, in this case people who have sustained a traumatic brain injury (TBI). The persona is used to better understand the possible interactions of a TBI population with a prototype of a technology that we wish to develop, namely the Cognitive Orthosis for coOKing (COOK). COOK is meant to be an assistive technology designed to promote independence in cooking within a supported-living residence. More specifically, this paper presents the persona-creation methodology, based on the first four phases of the persona lifecycle, and describes how the persona methodology served as a facilitator in initiating interdisciplinary collaboration between a clinical team and a computer science team. Creation of the personas relied on a clinical model (the Disability Creation Process) that contextualized the needs of this population and on an evaluation tool (the Instrumental Activities of Daily Living [IADL] Profile) that captured a wide range of cognitive assistance needs found in this same population. This paper provides an in-depth description of some of the most frequent everyday difficulties experienced by individuals with TBI, of the persona’s abilities, limitations, and social participation during the realization of IADL, and of the manifestations of these difficulties during IADL performance as represented through scenarios. The interdisciplinary team used the persona to complete a first description of the interactions of a persona with TBI with COOK. This work is an attempt to offer a communication tool, the persona, to facilitate interdisciplinary research among diverse disciplines that wish to develop a common language, models, and methodologies at the beginning of the design process.
APA, Harvard, Vancouver, ISO, and other styles
34

Wong, Lung-Hsiang, Ching Sing-Chai, and Guat Poh-Aw. "Seamless language learning: Second language learning with social media." Comunicar 25, no. 50 (January 1, 2017): 9–21. http://dx.doi.org/10.3916/c50-2017-01.

Full text
Abstract:
This conceptual paper describes a language learning model that applies social media to foster contextualized and connected language learning in communities. The model emphasizes weaving together different forms of language learning activities that take place in different learning contexts to achieve seamless language learning. It promotes social interactions via social media about the learners’ day-to-day life in the targeted second or foreign language. The paper first identifies three key features of the language learning approach, namely authenticity, contextualization, and socialization, and explicates how these features relate to the communicative approach to language learning. This is followed by a discussion of how the notion of seamless language learning can inform learning designers and learners in synergizing the desired characteristics of language learning. Finally, we propose the SMILLA (Social MedIa as Language Learning Artifacts) Framework to operationalize seamless language learning with the use of social media. A case of seamless language learning environment design, known as MyCloud, is described to illustrate the practicality of the SMILLA Framework. Results of its application suggest a potential effect on learners, producing users who are more active in socially meaningful contexts, prepared to reflect on their own use of the language, and less in need of teacher intervention.
APA, Harvard, Vancouver, ISO, and other styles
35

Nair-Venugopal, Shanta. "An interactional model of English in Malaysia." Asian Business Discourse(s) Part II 16, no. 1 (May 11, 2006): 51–75. http://dx.doi.org/10.1075/japc.16.1.04nai.

Full text
Abstract:
This article argues for an interactional model of English as contextualised language use for localised business purposes. Two observations on the ground provided the impetus for the argument: one, that business communication skills training in English in Malaysia is invariably based on the prescribed usage of commercially produced materials; and two, that communication skills training in English is a lucrative, model-dependent industry that supports the logic of the triumphalism of specific models of English as an international or global language (Smith 1983; Crystal 1997), or as the language of international capitalism. Yet a functional model of interaction operates in actual workplace settings in Malaysia. Such evidence counters marketing mythologies of purportedly universal forms of language use in business contexts worldwide. It exposes the dichotomy between the prescribed patterns of English usage found in the plethora of commercially produced materials and contextualised language use as business discourse in real-time workplace interactions. Not least of all, it provides support for an indigenous model as an appropriate response to a pervasive global ideology at work. To ignore this phenomenon is to deny the pragmatic relevance of speaking English as one of the languages of localised business, which is just as vital for national economies as the big business of international capitalism.
APA, Harvard, Vancouver, ISO, and other styles
36

Farías, Miguel, Katica Obilinovic, and Roxana Orrego. "Implications of Multimodal Learning Models for foreign language teaching and learning." Colombian Applied Linguistics Journal, no. 9 (April 4, 2011): 174. http://dx.doi.org/10.14483/22487085.3150.

Full text
Abstract:
This literature review article approaches the topic of information and communications technologies from the perspective of their impact on the language learning process, with particular emphasis on the most appropriate designs of multimodal texts as informed by models of multimodal learning. The first part contextualizes multimodality within the fields of discourse studies, the psychology of learning, and CALL; the second deals with multimodal conceptions of reading and writing by discussing hypertextuality and literacy. A final section outlines the possible implications of multimodal learning models for foreign language teaching and learning.
APA, Harvard, Vancouver, ISO, and other styles
37

Todorov, Petar V., Benjamin M. Gyori, John A. Bachman, and Peter K. Sorger. "INDRA-IPM: interactive pathway modeling using natural language with automated assembly." Bioinformatics 35, no. 21 (May 9, 2019): 4501–3. http://dx.doi.org/10.1093/bioinformatics/btz289.

Full text
Abstract:
Summary: INDRA-IPM (Interactive Pathway Map) is a web-based pathway map modeling tool that combines natural language processing with automated model assembly and visualization. INDRA-IPM contextualizes models with expression data and exports them to standard formats. Availability and implementation: INDRA-IPM is available at http://pathwaymap.indra.bio. Source code is available at http://github.com/sorgerlab/indra_pathway_map. The underlying web service API is available at http://api.indra.bio:8000. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
38

Slack, Dean L., Mariann Hardey, and Noura Al Moubayed. "On the Hierarchical Information in a Single Contextualised Word Representation (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13917–18. http://dx.doi.org/10.1609/aaai.v34i10.7231.

Full text
Abstract:
Contextual word embeddings produced by neural language models, such as BERT or ELMo, have seen widespread application and performance gains across many Natural Language Processing tasks, suggesting rich linguistic features encoded in their representations. This work aims to investigate to what extent any linguistic hierarchical information is encoded into a single contextual embedding. Using labelled constituency trees, we train simple linear classifiers on top of single contextualised word representations for ancestor sentiment analysis tasks at multiple constituency levels of a sentence. To assess the presence of hierarchical information throughout the networks, the linear classifiers are trained using representations produced by each intermediate layer of BERT and ELMo variants. We show that with no fine-tuning, a single contextualised representation encodes enough syntactic and semantic sentence-level information to significantly outperform a non-contextual baseline for classifying 5-class sentiment of its ancestor constituents at multiple levels of the constituency tree. Additionally, we show that both LSTM and transformer architectures trained on similarly sized datasets achieve similar levels of performance on these tasks. Future work looks to expand the analysis to a wider range of NLP tasks and contextualisers.
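The probing setup this abstract describes, training a simple linear classifier on frozen representations, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the embeddings below are synthetic stand-ins for BERT/ELMo layer outputs, and the three classes stand in for sentiment labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_probe(X, y, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic-regression probe on frozen embeddings X of shape (n, d)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)         # softmax probabilities
        grad = (P - Y) / n                        # cross-entropy gradient
        W -= lr * X.T @ grad                      # only the probe is trained;
        b -= lr * grad.sum(axis=0)                # the embeddings stay frozen
    return W, b

def probe_accuracy(W, b, X, y):
    return float(((X @ W + b).argmax(axis=1) == y).mean())

# Synthetic "contextual embeddings": 3 well-separated classes in 16 dimensions.
n_classes, d = 3, 16
means = rng.normal(size=(n_classes, d)) * 2.0
y = rng.integers(0, n_classes, size=300)
X = means[y] + rng.normal(size=(300, d))

W, b = train_linear_probe(X, y, n_classes)
acc = probe_accuracy(W, b, X, y)
print(f"probe training accuracy: {acc:.2f}")
```

If a probe this simple performs well above a non-contextual baseline, the information it exploits must already be present in the frozen representation, which is the paper's line of argument.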
APA, Harvard, Vancouver, ISO, and other styles
39

Southwood, Katherine. "‘You are all quacks; if only you would shut up’ (Job 13.4b–5a): Sin and illness in the sacred and the secular, the ancient and the modern." Theology 121, no. 2 (February 23, 2018): 84–91. http://dx.doi.org/10.1177/0040571x17740523.

Full text
Abstract:
This article focuses on the theme of illness within the dialogue between the character of Job and his ‘friends’ (Job 3—37). It looks specifically at the different explanatory models used by the characters to interpret and contextualize Job’s condition and explores language of sin and blame in illness. A key contribution of this article is to highlight the problematic nature of moralizing and searching for meaning during illness and to emphasize the need for greater empathy.
APA, Harvard, Vancouver, ISO, and other styles
40

Alharbi, Abdullah I., Phillip Smith, and Mark Lee. "Enhancing Contextualised Language Models with Static Character and Word Embeddings for Emotional Intensity and Sentiment Strength Detection in Arabic Tweets." Procedia Computer Science 189 (2021): 258–65. http://dx.doi.org/10.1016/j.procs.2021.05.089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Scheidel, Walter. "The Greek demographic expansion: models and comparisons." Journal of Hellenic Studies 123 (November 2003): 120–40. http://dx.doi.org/10.2307/3246263.

Full text
Abstract:
For much of the first millennium BC, the number of Greeks increased considerably, both in the Aegean core and in the expanding periphery of the Mediterranean and Black Sea regions. This paper is the first attempt to establish a coherent quantitative framework for the study of this process. In the first section, I argue that despite the lack of statistical data, it is possible to identify a plausible range of estimates of average long-term demographic growth rates in mainland Greece from the Early Iron Age to the Classical period. Elaborating on this finding, the second section offers a comprehensive rebuttal of the notion of explosive population growth in parts of the eighth and seventh centuries BC. In the third section, I seek to determine the probable scale and demographic consequences of Greek settlement overseas. A brief preliminary look at the relationship between population growth and the quality of life concludes my survey. The resultant series of interlocking parametric models is meant to contextualize the demographic development of ancient Greece within the wider ambit of pre-modern demography, and to provide a conceptual template for future research in this area.
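The kind of estimate the abstract refers to, an average long-term growth rate implied by start and end populations, reduces to simple compound-growth arithmetic. The population figures below are invented for illustration and are not taken from the paper.

```python
# Average annual growth rate implied by a population change over t years:
#   r = (P_end / P_start) ** (1 / t) - 1

def implied_growth_rate(p_start, p_end, years):
    """Compound annual growth rate implied by a change from p_start to p_end."""
    return (p_end / p_start) ** (1.0 / years) - 1.0

# E.g. a population that quadruples over 400 years implies roughly 0.35% per year,
# the sort of modest long-term rate such frameworks work with.
r = implied_growth_rate(200_000, 800_000, 400)
print(f"{r * 100:.2f}% per year")
```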
APA, Harvard, Vancouver, ISO, and other styles
42

Gibbs, Levi S. "Going Beyond the Western Pass: Chinese Folk Models of Danger and Abandonment in Songs of Separation." Modern China 46, no. 5 (September 8, 2019): 490–520. http://dx.doi.org/10.1177/0097700419874888.

Full text
Abstract:
From the early Qing dynasty (1644–1911) to the beginning of the People’s Republic, men in northern China from drought-prone regions of northwestern Shanxi province and northeastern Shaanxi province would travel beyond the Great Wall to find work in western Inner Mongolia, in a migration known as “going beyond the Western Pass” 走西口. This article analyzes anthologized song lyrics and ethnographic interviews about this migration to explore how songs of separation performed at temple fairs approached danger and abandonment using traditional metaphors and “folk models” similar to those of parents protecting children from life’s hazards and widows and widowers lamenting the loss of loved ones. I argue that these duets between singers embodying the roles of migrant laborers and the women they left behind provided a public language for audiences to reflect upon and contextualize private emotions in a broader social context, offering rhetorical resolutions to ambivalent anxieties.
APA, Harvard, Vancouver, ISO, and other styles
43

Gibbs, Levi S. "Going Beyond the Western Pass: Chinese Folk Models of Danger and Abandonment in Songs of Separation." Modern China 47, no. 2 (February 1, 2021): 178–207. http://dx.doi.org/10.1177/0097700419860417.

Full text
Abstract:
From the early Qing dynasty (1644–1911) to the beginning of the People’s Republic, men in northern China from drought-prone regions of northwestern Shanxi province and northeastern Shaanxi province would travel beyond the Great Wall to find work in western Inner Mongolia, in a migration known as “going beyond the Western Pass” 走西口. This article analyzes anthologized song lyrics and ethnographic interviews about this migration to explore how songs of separation performed at temple fairs approached danger and abandonment using traditional metaphors and “folk models” similar to those of parents protecting children from life’s hazards and widows and widowers lamenting the loss of loved ones. I argue that these duets between singers embodying the roles of migrant laborers and the women they left behind provided a public language for audiences to reflect upon and contextualize private emotions in a broader social context, offering rhetorical resolutions to ambivalent anxieties.
APA, Harvard, Vancouver, ISO, and other styles
44

Morgan-Short, Kara. "Insights into the neural mechanisms of becoming bilingual: A brief synthesis of second language research with artificial linguistic systems." Bilingualism: Language and Cognition 23, no. 1 (November 5, 2019): 87–91. http://dx.doi.org/10.1017/s1366728919000701.

Full text
Abstract:
Artificial linguistic systems can offer researchers test-tube-like models of second language (L2) acquisition through which specific questions can be examined under tightly controlled conditions. This paper examines what research with artificial linguistic systems has revealed about the neural mechanisms involved in L2 grammar learning. It first considers the validity of meaningful and non-meaningful artificial linguistic systems. Then it contextualizes and synthesizes neural artificial linguistic system research related to questions about age of exposure to the L2, type of exposure, and online L2 learning mechanisms. Overall, using artificial linguistic systems seems to be an effective and productive way of developing knowledge about L2 neural processes and correlates. With further validation, artificial linguistic system paradigms may prove an important tool more generally in understanding how individuals learn new linguistic systems as they become bilingual.
APA, Harvard, Vancouver, ISO, and other styles
45

MacLeavy, Julie. "The Six Dimensions of New Labour: Structures, Strategies, and Languages of Neoliberal Legitimacy." Environment and Planning A: Economy and Space 39, no. 7 (July 2007): 1715–34. http://dx.doi.org/10.1068/a38206a.

Full text
Abstract:
This paper explores New Labour's emerging political economy using Jessop's six dimensions of the state as a heuristic device. In pointing towards the contextualised and institutionalised nature of sociopolitical action, the six dimensions of the state model is posited as a means of unravelling the constitution and ordering of the state within contemporary society. Specifically, the paper focuses on the potential of the six-dimension model to frame analysis of the strategic use of language in contemporary governance. This is illustrated through the model's application to New Labour's political regime as manifested in key government practices and policies, and the extension of the model to include analysis of the linguistic representation of the shift from a Keynesian welfare national state to a Schumpeterian workfare postnational regime.
APA, Harvard, Vancouver, ISO, and other styles
46

Vitral, Letícia, and João Queiroz. "The epistemic role of intermedial visual artworks: An analysis of the photobooks Palast der Republik and Domesticidades." Sign Systems Studies 46, no. 1 (May 7, 2018): 64–89. http://dx.doi.org/10.12697/sss.2018.46.1.03.

Full text
Abstract:
This paper presents, describes and analyses two photobooks: Palast der Republik and Domesticidades. We claim that, because of their highly iconic features, they can be regarded as epistemic artefacts (models) since they reveal information about their objects, as well as about their own morphological properties. The analysis focuses on (i) the kind of relations the photobooks establish with their respective objects (we claim that it is a mainly-iconic relation) and (ii) on the semiotic couplings that can be found in them – a type of interaction between semiotic resources (such as photographs, maps, written texts, illustrations, among others). We contextualize this analysis in relation to both a semiotic and an intermedial background. Further, we claim that the epistemic role of such artworks is directly related to their material and structural features that constrain the possibilities of manipulation and reasoning upon them. We conclude by presenting some of the information that was revealed by the manipulation of these photobooks, claiming that the semiotic-artefactual approach to models can be an epistemically interesting conceptual frame to thinking about artistic artefacts.
APA, Harvard, Vancouver, ISO, and other styles
47

Günther, Clemens. "Von Tauwetter und Tiefdruckgebieten." Zeitschrift für Slawistik 66, no. 2 (June 1, 2021): 323–50. http://dx.doi.org/10.1515/slaw-2021-0015.

Full text
Abstract:
Summary: Based on recent studies of ‘literary meteorology’, the article examines depictions of meteorology in Soviet literature. It contextualizes Daniil Granin’s Into the Storm (1962) and Anatolii Gladilin’s Forecast for Tomorrow (1972) within the post-war history of meteorology and reads both texts as examples of a ‘popular meteorology’ in which important shifts in the Soviet culture of science can be detected. In contrast to political readings of late Soviet prose on science, it holds that literary texts can provide valuable insights into shifts in styles of thinking, the praxeology of science, its anthropological implications, and models of scientific evolution.
APA, Harvard, Vancouver, ISO, and other styles
48

Grudev, Lachezar. "Friedrich A. Lutz’ Epistemological and Methodological Messages During the German-Language Business Cycle Debate." Journal of Contextual Economics – Schmollers Jahrbuch 139, no. 1 (January 1, 2019): 1–28. http://dx.doi.org/10.3790/schm.139.1.1.

Full text
Abstract:
Friedrich A. Lutz’ 1932 habilitation thesis is considered the last highlight in a German-language business cycle debate that took place during the interwar period. This debate, initiated by Adolf Löwe, concentrated on the necessary conditions for defining a dynamic theory that should explain the business cycle, understood as a dynamic disequilibrium phenomenon, in a deductive way. This article contributes to Lutz scholarship by focusing on Lutz’ criticism of Clément Juglar’s “unconditional” observations. This constituted the basis for the problematic concept of wave-like fluctuation subsequently adopted by the Historical School and Joseph Schumpeter. I establish a relationship between Lutz’ criticism and his statement that this perspective does not find support in economic history. Lutz asserted that each crisis represents a unique historical phenomenon caused by specific factors whose impact on the economy depends on its institutional framework. From this, I derive an epistemological claim, namely that the equilibrium tendencies within the market order should be the subject of inquiry, and a methodological call, namely the development of models showing hypothetically what factors can disturb these tendencies. The paper contextualizes Lutz’ criticism and messages within the formation of the Freiburg School’s research program.
APA, Harvard, Vancouver, ISO, and other styles
49

GREENMAN, CAROLINE. "Coaching Academic English through voice and text production models." ReCALL 16, no. 1 (May 2004): 51–70. http://dx.doi.org/10.1017/s0958344004000515.

Full text
Abstract:
We report on how technological developments have enabled us to change our concepts and practices regarding voice and text coaching and how this in turn has raised the level of literary competence among non-native doctoral students seeking publication in English in scientific journals. We describe models for marking, peer reviewing and coaching spoken delivery and written text. Our models spring from our dedicated physical CALL environment and take into account learner expectations and further develop tangible learner strategies. As our models are applied in an open learning platform they are accessible, interactive and facilitate both differentiated progressive feedback and student profiling. The four skills are revisited through very traditional means in a methodological paradigm requiring some ‘new literacy’. Between 1997 and 2000 we were devoted to developing and testing our dedicated physical CALL classroom model; in the period 2000–2003 we have focused on both sustaining this and improving our procedures. Refining the coaching and interactive feedback procedures for both text and voice development within the virtual classroom model (established at the Institute for Living Languages at KULeuven in 1997) informs the focus of our research. During the latter period, the resulting models have been rigorously tested by about three hundred KULeuven students, half of whom are post graduates and half of whom are undergraduates. The specific need for refined coaching and feedback for doctoral students is first defined, then the concept, procedure and results of three models are outlined and illustrated. The models include a text marking and coaching model, a speech marking and coaching model and a model to contextualise and manage the interactive cycle of learner, peer and coach writing and speaking processes. 
Key to our findings is the fact that our models help us to help learners differentiate between passive and active retrieval, and between transfer issues and knowledge-gap issues. The discussion centres on further model development and integration.
APA, Harvard, Vancouver, ISO, and other styles
50

Chimah, Jonathan N., and Friday Ibiam Ude. "Current trends in information retrieval systems: review of fuzzy set theory and fuzzy Boolean retrieval models." Journal of Library Services and Technologies 2, no. 2 (June 2020): 48–56. http://dx.doi.org/10.47524/jlst.v2i2.5.

Full text
Abstract:
This paper reviews the concept and goal of Information Retrieval Systems (IRSs). It also explains related concepts in Information Retrieval (IR), including imprecision, vagueness, uncertainty, and inconsistency. Current trends in IRSs are discussed, and fuzzy set theory and fuzzy retrieval models are reviewed. The paper also discusses extensions of fuzzy Boolean retrieval models, including fuzzy techniques for document indexing and flexible query languages. Three fuzzy associative mechanisms are identified: (1) fuzzy pseudothesauri and fuzzy ontologies, which can be used to contextualize a search by expanding the set of index terms of documents; (2) an alternative use of fuzzy pseudothesauri and fuzzy ontologies, in which the query is expanded with related terms while taking into account the varying importance of each additional term; and (3) fuzzy clustering techniques, in which each document can be placed within several clusters with a given strength of belonging to each cluster, used to expand the set of documents retrieved in response to a query. The paper concludes by recommending that, in an electronic library environment, librarians and information scientists acquaint themselves with these terms so as to be better equipped to help library users retrieve online documents relevant to their information needs.
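The core idea of the fuzzy Boolean retrieval model the review surveys can be sketched in a few lines: each document assigns a term a membership degree in [0, 1] rather than a binary indicator, and the Boolean operators become min (AND), max (OR), and complement (NOT). The documents, terms, and degrees below are invented for illustration.

```python
# Toy fuzzy index: term -> membership degree per document.
index = {
    "doc1": {"fuzzy": 0.9, "retrieval": 0.7, "ontology": 0.1},
    "doc2": {"fuzzy": 0.4, "retrieval": 0.8, "ontology": 0.6},
    "doc3": {"fuzzy": 0.0, "retrieval": 0.3, "ontology": 0.9},
}

def mu(doc, term):
    """Membership degree of a term in a document (0 if the term is unindexed)."""
    return index[doc].get(term, 0.0)

# Standard fuzzy-set interpretations of the Boolean connectives.
def AND(a, b): return min(a, b)
def OR(a, b):  return max(a, b)
def NOT(a):    return 1.0 - a

def score(doc):
    # Query: fuzzy AND (retrieval OR NOT ontology)
    return AND(mu(doc, "fuzzy"), OR(mu(doc, "retrieval"), NOT(mu(doc, "ontology"))))

# Unlike crisp Boolean retrieval, the result is a ranking, not a flat set.
ranking = sorted(index, key=score, reverse=True)
print([(d, round(score(d), 2)) for d in ranking])  # → [('doc1', 0.9), ('doc2', 0.4), ('doc3', 0.0)]
```

The query-expansion mechanisms the paper lists (pseudothesauri, ontologies, clustering) would act on this picture by adding further weighted terms to the query or further documents to the candidate set before scoring.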
APA, Harvard, Vancouver, ISO, and other styles