Academic literature on the topic 'Language processing tasks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Language processing tasks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Language processing tasks"

1

Sun, Tian-Xiang, Xiang-Yang Liu, Xi-Peng Qiu, and Xuan-Jing Huang. "Paradigm Shift in Natural Language Processing." Machine Intelligence Research 19, no. 3 (May 28, 2022): 169–83. http://dx.doi.org/10.1007/s11633-022-1331-6.

Abstract:
In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a bundle of tasks such as POS-tagging, named entity recognition (NER), and chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, which is solving one NLP task in a new paradigm by reformulating the task. The paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
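As a concrete illustration of the reformulation idea this abstract describes (not an example taken from the paper itself), the hypothetical snippet below casts one NER instance first in the sequence labeling paradigm and then as prompt-style text generation; the tag set and prompt template are invented for the example.

```python
# One NER example expressed in two paradigms (illustrative only).
tokens = ["Barack", "Obama", "visited", "Paris"]

# Paradigm 1: sequence labeling -- one BIO tag per token.
bio_tags = ["B-PER", "I-PER", "O", "B-LOC"]

# Paradigm 2: the same task reformulated as text generation -- the model
# is asked to produce the entities directly (hypothetical prompt template).
prompt = ("Extract the named entities from the sentence: "
          "'Barack Obama visited Paris'. Answer:")
expected_output = "Barack Obama (person); Paris (location)"

for tok, tag in zip(tokens, bio_tags):
    print(f"{tok}\t{tag}")
print(prompt, expected_output)
```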
2

Kachkou, Dz I. "Applying the language acquisition model to the solution of small language processing tasks." Informatics 19, no. 1 (January 5, 2022): 96–110. http://dx.doi.org/10.37661/1816-0301-2022-19-1-96-110.

Abstract:
The paper addresses the problem of building a computer model of a small language. The relevance of this task follows from several considerations: the need to eliminate the information inequality between speakers of different languages; the need for new tools for studying poorly understood languages, as well as innovative approaches to language modeling in low-resource contexts; and the problem of supporting and developing small languages. At the stage of describing the problem situation, there are three main objectives: to justify modeling a language under resource scarcity as a distinct task in natural language processing, to review the literature on the topic, and to develop the concept of a language acquisition model that works with a relatively small number of available resources. Computer modeling techniques using neural networks, semi-supervised learning, and reinforcement learning are involved. The paper reviews the literature on modeling how a child learns the vocabulary, morphology, and grammar of its native language. Based on the current understanding of language acquisition and existing computer models of this process, an architecture is proposed for a small-language processing system that is taught through modeling of ontogenesis. The main components of the system and the principles of their interaction are highlighted. The system is built around a module based on modern dialogue language models and trained on some resource-rich language (e.g., English). During training, an intermediate layer is used which represents statements in some abstract form, for example, in the symbols of formal semantics. The relationship between the formal recording of utterances and their translation into the target low-resource language is learned by modeling a child's acquisition of the vocabulary and grammar of the language. One component represents the non-linguistic context in which language learning takes place. A detailed substantiation of the relevance of modeling small languages is given: the social significance of the problem is noted, and the benefits for linguistics, ethnography, ethnology, and cultural anthropology are shown. The ineffectiveness, under resource scarcity, of approaches applied to large languages is noted. A model of language learning by means of ontogenesis simulation is proposed, based both on results obtained in the field of computer modeling and on psycholinguistic data.
3

Xiao, Yijun, and William Yang Wang. "Quantifying Uncertainties in Natural Language Processing Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7322–29. http://dx.doi.org/10.1609/aaai.v33i01.33017322.

Abstract:
Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificial intelligence systems. Recent progress in Bayesian deep learning has made such quantification realizable. In this paper, we propose novel methods to study the benefits of characterizing model and data uncertainties for natural language processing (NLP) tasks. With empirical experiments on sentiment analysis, named entity recognition, and language modeling using convolutional and recurrent neural network models, we show that explicitly modeling uncertainties is not only necessary to measure output confidence levels, but also useful for enhancing model performance in various NLP tasks.
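To make the abstract's notion of model uncertainty concrete, here is a minimal numpy sketch of Monte Carlo dropout, one common realization of that idea; the toy two-layer network, random weights, dropout rate, and number of passes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # toy MLP weights (assumed, untrained)
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, drop=0.5):
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > drop      # dropout kept active at test time
    h = h * mask / (1.0 - drop)            # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 8))
samples = np.array([stochastic_forward(x) for _ in range(100)])  # T passes
print("predictive mean:", float(samples.mean()))
print("model uncertainty (std over masks):", float(samples.std()))
```

The spread of predictions across dropout masks serves as the model-uncertainty signal; data uncertainty would additionally require the network to predict an output variance.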
4

Hwa-Froelich, Deborah A., and Hisako Matsuo. "Vietnamese Children and Language-Based Processing Tasks." Language, Speech, and Hearing Services in Schools 36, no. 3 (July 2005): 230–43. http://dx.doi.org/10.1044/0161-1461(2005/023).

5

Belov, Sergey, Daria Zrelova, Petr Zrelov, and Vladimir Korenkov. "Overview of methods for automatic natural language text processing." System Analysis in Science and Education, no. 3 (September 30, 2020): 8–22. http://dx.doi.org/10.37005/2071-9612-2020-3-8-22.

Abstract:
This paper provides a brief overview of modern methods and approaches used for automatic processing of text information. In English-language literature this area of science is called NLP (Natural Language Processing). The name itself indicates that the subject of analysis (and, for many tasks, of synthesis) is material presented in one of the natural languages (and, for a number of tasks, in several languages simultaneously), i.e. the national languages of communication between people; programming languages are not included in this group. In Russian-language literature the area is called computer (or mathematical) linguistics. NLP (computational linguistics) usually includes speech analysis along with text analysis, but this review does not consider speech analysis. The review draws on original works, monographs, and a number of articles published in the «Open Systems.DBMS» journal.
6

Roh, Jihyeon, Sungjin Park, Bo-Kyeong Kim, Sang-Hoon Oh, and Soo-Young Lee. "Unsupervised multi-sense language models for natural language processing tasks." Neural Networks 142 (October 2021): 397–409. http://dx.doi.org/10.1016/j.neunet.2021.05.023.

7

Veldhuis, Dorina, and Jeanne Kurvers. "Offline segmentation and online language processing units." Units of Language – Units of Writing 15, no. 2 (August 10, 2012): 165–84. http://dx.doi.org/10.1075/wll.15.2.03vel.

Abstract:
Although metalinguistic (‘offline’) awareness of words as linguistic units has been related to literacy, it is still uncertain whether literacy also affects the units of language that people process unconsciously (‘online’). In this contribution, we first discuss the characteristics of offline and online tasks, opening up the perspective that such tasks vary in nature along a continuum ranging from more offline to more online. Subsequently, we present a study employing three relatively more offline and two more online tasks which we conducted among 83 preliterate and 121 literate children at Dutch primary schools. The results of the more offline tasks reveal a significant influence of literacy on segmentation along word-boundaries, while the results from the relatively more online tasks are less clear-cut with respect to the way in which literacy affects language processing.
Keywords: literacy; language acquisition; metalinguistic awareness; online and offline task; word-segmentation; language processing; developmental psycholinguistics; Dutch
8

Commissaire, Eva, Adrian Pasquarella, Becky Xi Chen, and S. Hélène Deacon. "The development of orthographic processing skills in children in early French immersion programs." Written Language and Literacy 17, no. 1 (April 11, 2014): 16–39. http://dx.doi.org/10.1075/wll.17.1.02com.

Abstract:
Children learning to read in two languages are faced with orthographic features from both languages, either unique to a language or similar across languages. In the present study, we examined how children develop orthographic processing skills over time (from grade 1 to grade 2) with a sample of Canadian children attending a French immersion program, and we investigated the underlying factor structure of orthographic skills across English and French. Two orthographic processing tasks were administered in both languages: lexical orthographic processing (e.g. choose the correct spelling from people–peeple) and sub-lexical orthographic processing (e.g. which is the more word-like, vaid–vayd?), which included both language-specific and language-shared orthographic regularities. Children's performance in the sub-lexical tasks increased with grade but was comparable across languages. Further, evidence for a one-factor model including all measures suggested that there is a common underlying orthographic processing skill that cuts across measurement and language variables.
Keywords: orthographic processing; reading; French immersion; bilinguals; second language learners
9

Mihaljević Djigunović, Jelena. "Language anxiety and language processing." EUROSLA Yearbook 6 (July 20, 2006): 191–212. http://dx.doi.org/10.1075/eurosla.6.12mih.

Abstract:
This paper focuses on two studies into the effects of language anxiety on language processing. Using samples of Croatian L1 – English L2 speakers performing two picture description tasks (one in L1 and one in L2), the studies analysed their oral productions in order to identify a number of temporal and hesitation signals of planning processes. The findings suggest that observing learners using audio and video equipment and trying to increase their anxiety through interpersonal style does not produce a significant difference. However, watching someone apparently taking notes on their performance seemed to be significantly anxiety-provoking for learners. Qualitative analysis suggests that, in comparison with low-anxiety language users, high-anxiety language users produce longer texts in L2 than in L1, produce smaller amounts of continuous speech in both L1 and L2, produce filled pauses with a higher mean length in L2 than in L1, have longer mid-clause pauses, make fewer repetitions, and make more false starts.
10

Jagarlamudi, Jagadeesh, Seth Juarez, and Hal Daumé III. "Kernelized Sorting for Natural Language Processing." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1020–25. http://dx.doi.org/10.1609/aaai.v24i1.7718.

Abstract:
Kernelized sorting is an approach for matching objects from two sources (or domains) that does not require any prior notion of similarity between objects across the two sources. Unfortunately, this technique is highly sensitive to initialization and high dimensional data. We present variants of kernelized sorting to increase its robustness and performance on several Natural Language Processing (NLP) tasks: document matching from parallel and comparable corpora, machine transliteration and even image processing. Empirically we show that, on these tasks, a semi-supervised variant of kernelized sorting outperforms matching canonical correlation analysis.
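As a sketch of the objective behind kernelized sorting (a generic formulation, not the paper's semi-supervised variant), the snippet below searches for a permutation P maximizing tr(Kx P Ky Pᵀ) by alternating linear assignment steps on random toy data; the data, kernel choice, and iteration count are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))      # objects from domain 1
Y = rng.normal(size=(10, 3))      # objects from domain 2 (no shared features)
H = np.eye(10) - np.ones((10, 10)) / 10          # centering matrix
Kx, Ky = H @ (X @ X.T) @ H, H @ (Y @ Y.T) @ H    # centered linear kernels

P = np.eye(10)                    # initial permutation matrix
for _ in range(20):
    S = Kx @ P @ Ky               # profit S[i, j] for matching i with j
    rows, cols = linear_sum_assignment(-S)       # negate to maximize
    P_new = np.zeros_like(P)
    P_new[rows, cols] = 1.0
    if np.allclose(P_new, P):     # converged to a fixed point
        break
    P = P_new
print("objective tr(Kx P Ky P^T):", np.trace(Kx @ P @ Ky @ P.T))
```

Because the underlying problem is a quadratic assignment, this alternating heuristic only finds a local optimum, which is consistent with the abstract's remark about sensitivity to initialization.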

Dissertations / Theses on the topic "Language processing tasks"

1

Medlock, Benjamin William. "Investigating classification for natural language processing tasks." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611949.

2

Dyson, Lucy. "Insights into language processing in aphasia from semantic priming and semantic judgement tasks." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19144/.

Abstract:
The nature of semantic impairment in people with aphasia (PWA) provides the background to the current study, which examines whether different methods of semantic assessment can account for such deficits. Cognitive ability, which has previously been linked to language ability in PWA, may impact on test performance and was therefore also examined. The aims of the current study were to compare performance of control participants and PWA on implicit and explicit assessment of semantics, and to relate it to performance on tests of cognition. The impact of semantically similar versus associative relationship types between test stimuli was also considered. Three experimental semantic tasks were developed, including one implicit measure of semantic processing (Semantic Priming) and two explicit measures (Word to Picture Verification and Word to Picture Matching). Test stimuli were matched in terms of key psycholinguistic variables of frequency, imageability and length, and other factors including visual similarity, semantic similarity, and association. Performance of 40 control participants and 20 PWA was investigated within and between participant groups. The relationship between semantic task performance and existing semantic and cognitive assessments was also explored in PWA. An important finding related to a subgroup of PWA who were impaired on the explicit experimental semantic tasks but demonstrated intact semantic processing via the implicit method. Within tasks some differences were found in the effects of semantically related or associated stimuli. No relationships were found between experimental semantic task performance and cognitive task accuracy. The research offers insights into the role of implicit language testing, the impact of stimuli relationship type, and the complex relationship between semantic processing and cognition. The findings underline the need for valid and accurate measures of semantic processing to be in place to enable accurate diagnosis for PWA, in order to direct appropriate intervention choice and facilitate successful rehabilitation.
3

Zahidin, Ahmad Zamri. "Using Ada tasks (concurrent processing) to simulate a business system." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/539634.

Abstract:
Concurrent processing has always been a traditional problem in developing operating systems. Today, concurrent algorithms occur in many application areas such as science and engineering, artificial intelligence, business database systems, and many more. The presence of concurrent processing facilities allows the natural expression of these algorithms as concurrent programs; this is a very distinct advantage if the underlying computer offers parallelism. On the other hand, the lack of concurrent processing facilities forces these algorithms to be written as sequential programs, destroying the structure of the algorithms and making them hard to understand and analyze. The first major programming language to offer high-level concurrent processing facilities is Ada, a complex, general-purpose programming language that provides an excellent concurrent programming facility, the task, based on the rendezvous concept. In this study, concurrent processing is exercised by simulating a business system using the Ada language and its facilities. A warehouse (the business system) with a number of employees purchases microwave ovens from various vendors and distributes them to several retailers. Activities in the system are simulated by assigning each employee to a specific task, with all tasks running simultaneously. The programs written for this business system produce the transactions and financial statements of a typical business day; they also examine the behavior of activities that occur simultaneously. The end results show that concurrency and Ada work efficiently and effectively.
Department of Computer Science
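Ada code itself is not reproduced here; as a rough Python analogue of the thesis's task-per-employee design (assuming a blocking queue as a stand-in for Ada's synchronous rendezvous, which is a simplification), consider:

```python
import queue
import threading

orders = queue.Queue()   # meeting point between the vendor and employee tasks

def vendor(n):
    for i in range(n):
        orders.put(f"microwave-{i}")   # offer an item to whichever task accepts
    orders.put(None)                   # sentinel: no more stock

def employee(name):
    while True:
        item = orders.get()            # block until an item is available
        if item is None:
            orders.put(None)           # propagate the sentinel to the others
            break
        print(f"{name} ships {item} to a retailer")

workers = [threading.Thread(target=employee, args=(f"emp{i}",)) for i in range(3)]
for w in workers:
    w.start()
vendor(6)
for w in workers:
    w.join()
```

Unlike Ada's rendezvous, a queue decouples the two sides in time; the sketch only conveys the structure of employees running as concurrent tasks.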
4

Laws, Florian. "Effective active learning for complex natural language processing tasks." Doctoral thesis, supervised by Hinrich Schütze. Stuttgart: Universitätsbibliothek der Universität Stuttgart, 2013. http://d-nb.info/1030521204/34.

5

Lorello, Luca Salvatore. "Small transformers for Bioinformatics tasks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23883/.

Abstract:
Recent trends in bioinformatics try to align its techniques with more modern approaches based on statistical natural language processing and deep learning; however, state-of-the-art neural natural language processing techniques remain relatively unexplored in this domain. Large models are capable of achieving state-of-the-art performance, but a typical bioinformatics lab has limited hardware resources. For this reason, this thesis focuses on small architectures, whose training can be performed in a reasonable amount of time, while trying to limit or even negate the performance loss compared to SOTA. In particular, sparse attention mechanisms (such as the one proposed by Longformer) and parameter sharing techniques (such as the one proposed by Albert) are jointly explored with respect to two genetic languages: the human genome and the eukaryotic mitochondrial genomes of 2000+ different species. Contextual embeddings for each token are learned via pretraining on a language understanding task, in both RoBERTa and Albert styles, to highlight differences in performance and training efficiency. The learned contextual embeddings are finally exploited for fine-tuning one localization task (transcription start site in human promoters) and two sequence classification tasks (12S metagenomics in fishes and chromatin profile prediction, single-class and multi-class respectively). Using smaller architectures, near-SOTA performance is achieved on all the tasks already explored in the literature, and a new SOTA has been established for the other tasks. Further experiments with larger architectures consistently improved the previous SOTA for every task.
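A minimal PyTorch sketch of the Albert-style cross-layer parameter sharing the thesis explores: a single transformer layer is reused at every depth, so the parameter count stays constant as the network gets deeper. The dimensions and depth below are arbitrary assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)

def shared_encoder(x, depth=6):
    # Albert-style sharing: the SAME weights are applied at every depth.
    for _ in range(depth):
        x = layer(x)
    return x

tokens = torch.randn(2, 32, 128)               # (batch, sequence, embedding)
print(shared_encoder(tokens).shape)            # torch.Size([2, 32, 128])
print("parameter count, independent of depth:",
      sum(p.numel() for p in layer.parameters()))
```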
6

Curiel, Diaz Arturo Tlacaélel. "Using formal logic to represent sign language phonetics in semi-automatic annotation tasks." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30308/document.

Abstract:
This thesis presents a formal framework for the representation of Signed Languages (SLs), the languages of Deaf communities, in semi-automatic recognition tasks. SLs are complex visio-gestural communication systems; by using corporal gestures, signers achieve the same level of expressivity held by sound-based languages like English or French. However, unlike these, SL morphemes correspond to complex sequences of highly specific body postures, interleaved with postural changes: during signing, signers use several parts of their body simultaneously in order to combinatorially build phonemes. This situation, paired with an extensive use of the three-dimensional space, makes them difficult to represent with tools already existing in Natural Language Processing (NLP) of vocal languages. For this reason, the current work presents the development of a formal representation framework, intended to transform SL video repositories (corpora) into an intermediate representation layer, where automatic recognition algorithms can work under better conditions. The main idea is that corpora can be described with a specialized Labeled Transition System (LTS), which can then be annotated with logic formulae for its study. A multi-modal logic was chosen as the basis of the formal language: Propositional Dynamic Logic (PDL). This logic was originally created to specify and prove properties of computer programs. In particular, PDL uses the modal operators [a] and ⟨a⟩ to denote necessity and possibility, respectively. For SLs, a particular variant based on the original formalism was developed: the PDL for Sign Language (PDLSL). With PDLSL, body articulators (like the hands or head) are interpreted as independent agents; each articulator has its own set of valid actions and propositions, and executes them without influence from the others. The simultaneous execution of different actions by several articulators yields distinct situations, which can be searched over an LTS with formulae, by using the semantic rules of the logic. Together, the use of PDLSL and the proposed specialized data structures could help curb some of the current problems in SL study, notably the heterogeneity of corpora and the lack of automatic annotation aids. In the same vein, this may not only increase the size of the available datasets, but even extend previous results to new corpora; the framework inserts an intermediate representation layer which can serve to model any corpus, regardless of its technical limitations. With this, annotation is possible by defining with formulae the characteristics to annotate. Afterwards, a formal verification algorithm may be able to find those features in corpora, as long as they are represented as consistent LTSs. Finally, the development of the formal framework led to the creation of a semi-automatic annotator based on the presented theoretical principles. Broadly, the system receives an untreated corpus video, converts it automatically into a valid LTS (by way of some predefined rules), and then verifies human-created PDLSL formulae over the LTS. The final product is an automatically generated sub-lexical annotation, which can later be corrected by human annotators for use in other areas such as linguistics.
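For readers unfamiliar with PDL, the standard Kripke-style reading of the two modal operators mentioned in the abstract can be written as follows (a generic textbook formulation, not the thesis's exact notation):

```latex
% Satisfaction at a state s of a labeled transition system,
% with an accessibility relation R_a for each action a:
s \models [a]\varphi
  \iff \forall s'\, (s \mathrel{R_a} s' \Rightarrow s' \models \varphi)
  % necessity: phi holds after EVERY a-step

s \models \langle a \rangle \varphi
  \iff \exists s'\, (s \mathrel{R_a} s' \wedge s' \models \varphi)
  % possibility: SOME a-step reaches a state where phi holds
```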
7

Milajevs, Dmitrijs. "A study of model parameters for scaling up word to sentence similarity tasks in distributional semantics." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36225.

Abstract:
Representation of sentences that captures semantics is an essential part of natural language processing systems, such as information retrieval or machine translation. The representation of a sentence is commonly built by combining the representations of the words that the sentence consists of. Similarity between words is widely used as a proxy to evaluate semantic representations. Word similarity models are well-studied and are shown to positively correlate with human similarity judgements. Current evaluation of models of sentential similarity builds on the results obtained in lexical experiments. The main focus is how the lexical representations are used, rather than what they should be. It is often assumed that the optimal representations for word similarity are also optimal for sentence similarity. This work discards this assumption and systematically looks for lexical representations that are optimal for similarity measurement between sentences. We find that the best representation for word similarity is not always the best for sentence similarity, and vice versa. The best models in word similarity tasks perform best with additive composition. However, the best result on compositional tasks is achieved with Kronecker-based composition. There are representations that are equally good in both tasks when used with multiplicative composition. The systematic study of the parameters of similarity models reveals that the more information lexical representations contain, the more attention should be paid to noise. In particular, the word vectors in models with a feature size on the order of the vocabulary size should be sparse, but if a small number of context features is used then the vectors should be dense. Given the right lexical representations, compositional operators achieve state-of-the-art performance, improving over models that use neural word embeddings. To avoid overfitting, either several test datasets should be used or parameter selection should be based on parameters' average behaviours.
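The three composition operators the abstract compares have simple vector forms; the toy numpy sketch below uses random vectors as stand-ins for real lexical representations (the dimensionality is an arbitrary assumption).

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.normal(size=4), rng.normal(size=4)  # vectors for a two-word phrase

additive = u + v              # addition: keeps the words' dimensionality
multiplicative = u * v        # elementwise multiplication: also keeps it
kronecker = np.kron(u, v)     # Kronecker product: dimensionality grows to 16

for name, vec in [("additive", additive),
                  ("multiplicative", multiplicative),
                  ("kronecker", kronecker)]:
    print(f"{name:14s} shape={vec.shape}")
```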
8

Al-Hadlaq, Mohammed S. "Retention of words learned incidentally by Saudi EFL learners through working on vocabulary learning tasks constructed to activate varying depths of processing." Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1263891.

Abstract:
This study investigated the effectiveness of four vocabulary learning tasks on 104 Saudi EFL learners' retention of ten previously unencountered lexical items. The four tasks were: 1) writing original sentences (WS), 2) writing an original text (i.e. a composition) (WT), 3) filling in the blanks of single sentences (FS), and 4) filling in the blanks of a text (FT). Different results were obtained depending on whether the amount of time required by these tasks was considered in the analysis. When time was not considered, the WT group outperformed the other groups while the FS group obtained the lowest score; no significant differences were found between WS and FT. The picture changed dramatically when time was considered: the analysis of the ratio of score to time taken revealed no significant differences between the four groups except between FT and FS, in favor of FT. The differences in vocabulary gains between the four groups were ascribed to the level (or depth) of processing these tasks required of the subjects and to the richness of the context available in two of the four exercises, namely WT and FT. The researcher concluded that composition writing was the most helpful task for vocabulary retention and also for general language learning, followed by FT. Sentence fill-in was considered the least useful activity in this regard.
Department of English
9

Chen, Charles L. "Neural Network Models for Tasks in Open-Domain and Closed-Domain Question Answering." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1578592581367428.

10

Malapetsa, Christina. "Stroop tasks with visual and auditory stimuli : How different combinations of spoken words, written words, images and natural sounds affect reaction times." Thesis, Stockholms universitet, Institutionen för lingvistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-185057.

Abstract:
The Stroop effect is the delay in reaction times due to interference. Since the original experiments of 1935, it has been used primarily in linguistic contexts. Language is a complex skill unique to humans, which involves a large part of the cerebral cortex and many subcortical regions. It is perceived primarily in auditory form (spoken) and secondarily in visual form (written), but it is also always perceived in representational form (natural sounds, images, smells, etc.). Auditory signals are processed much faster than visual signals, and the language processing centres are closer to the primary auditory cortex than to the primary visual cortex, but due to the integration of stimuli and the role of the executive functions, we are able to perceive both simultaneously and coherently. However, auditory signals are still processed faster, and this study focused on establishing how auditory and visual, linguistic and representational stimuli interact with each other and affect reaction times in four Stroop tasks with four archetypal mammals (dog, cat, mouse and pig): a written word against an image, a spoken word against an image, a written word against a natural sound, and a spoken word against a natural sound. Four hypotheses were tested: in all tasks reaction times would be faster when the stimuli were congruent (Stroop Hypothesis); reaction times would be faster when both stimuli are auditory than when they are visual (Audiovisual Hypothesis); reaction times would be similar in the tasks where one stimulus is auditory and the other visual (Similarity Hypothesis); finally, reaction times would be slower when stimuli come from two sources than when they come from one source (Attention Hypothesis). Twelve native speakers of Swedish between the ages of 22 and 40 participated. The experiment took place in the EEG lab of the Linguistics Department of Stockholm University; the same researcher (the author) and equipment were used for all participants. The results confirmed the Stroop Hypothesis, did not confirm the Audiovisual and Similarity Hypotheses, and were mixed for the Attention Hypothesis. The somewhat controversial results were mostly attributed to a false initial assumption, namely that having two different auditory stimuli (one in each ear) was considered one source of stimuli, and possibly to the poor quality of some natural sounds. With this additional consideration, the results seem to be in accord with previous research. Future research could focus on more efficient ways to test reaction times in Stroop tasks involving auditory and visual stimuli, as well as on different populations, especially neurodiverse and bilingual populations.

Books on the topic "Language processing tasks"

1

Dellegrotto, John. Computerizing administrative tasks in schools. Rockville, MD: American Speech-Language-Hearing Association, 1991.

2

Skehan, Peter, ed. Processing perspectives on task performance. Amsterdam: John Benjamins Publishing Company, 2014.

3

Klose, G. Task-oriented modeling for natural language processing systems. Berlin: Technische Universität Berlin, Fachbereich 13--Informatik, 1993.

4

Auditory monitoring: On the processing of task-irrelevant ignored spoken language and non-language sounds. Leipzig: Leipziger Universitätsverlag, 2007.

5

Jacobsen, Thomas. Auditory monitoring: On the processing of task-irrelevant ignored spoken language and non-language sounds. Leipzig: Leipziger Universitätsverlag, 2007.

6

Bach, Carlo. An interactive knowledge-based shell for configuration tasks. Konstanz: Hartung-Gorre, 1994.

7

Antić, Zhenya. Python Natural Language Processing Cookbook: Over 50 recipes to understand, analyze, and generate text for implementing language processing tasks. Packt Publishing, 2021.

8

Lopatenko, Andrei, and Thushan Ganegedara. Natural Language Processing with TensorFlow: The Definitive NLP Book to Implement the Most Sought-After Machine Learning Models and Tasks. Packt Publishing, Limited, 2022.

9

Harnish, Stacy M. Anomia and Anomic Aphasia: Implications for Lexical Processing. Edited by Anastasia M. Raymer and Leslie J. Gonzalez Rothi. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199772391.013.7.

Abstract:
Anomia is a term that describes the inability to retrieve a desired word, and is the most common deficit present across different aphasia syndromes. Anomic aphasia is a specific aphasia syndrome characterized by a primary deficit of word retrieval with relatively spared performance in other language domains, such as auditory comprehension and sentence production. Damage to a number of cognitive and motor systems can produce errors in word retrieval tasks, only subsets of which are language deficits. In the cognitive and neuropsychological underpinnings section, we discuss the major processing steps that occur in lexical retrieval and outline how deficits at each of the stages may produce anomia. The neuroanatomical correlates section will include a review of lesion and neuroimaging studies of language processing to examine anomia and anomia recovery in the acute and chronic stages. The assessment section will highlight how discrepancies in performance between tasks contrasting output modes and input modalities may provide insight into the locus of impairment in anomia. Finally, the treatment section will outline some of the rehabilitation techniques for forms of anomia, and take a closer look at the evidence base for different aspects of treatment.
10

Stevenson, Mark, and Yorick Wilks. Word-Sense Disambiguation. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0013.

Abstract:
Word-sense disambiguation (WSD) is the process of identifying the meanings of words in context. This article begins by discussing the origins of the problem in the earliest machine translation systems. Early attempts to solve the WSD problem suffered from a lack of coverage. The main approaches to tackling the problem were dictionary-based, connectionist, and statistical strategies. The article concludes with a review of evaluation strategies for WSD and possible applications of the technology. WSD is an ‘intermediate’ task in language processing: like part-of-speech tagging or syntactic analysis, it is unlikely that anyone other than linguists would be interested in its results for their own sake. ‘Final’ tasks produce results of use to those without a specific interest in language and often make use of ‘intermediate’ tasks. WSD is a long-standing and important problem in the field of language processing.
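As a concrete picture of the dictionary-based strategy mentioned in the abstract, here is a minimal sketch in the spirit of the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the context. The two-sense toy inventory is invented for the example.

```python
# Simplified Lesk-style disambiguation over a toy sense inventory.
glosses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river":   "the sloping land alongside a river or stream",
}

def lesk(context_sentence):
    context = set(context_sentence.lower().split())
    # Pick the sense whose gloss overlaps most with the context words.
    return max(glosses,
               key=lambda sense: len(context & set(glosses[sense].split())))

print(lesk("she sat on the bank of the river watching the stream"))
# -> bank/river
print(lesk("the bank approved the loan and took the money on deposit"))
# -> bank/finance
```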

Book chapters on the topic "Language processing tasks"

1

Daelemans, Walter, Antal van den Bosch, and Ton Weijters. "Empirical learning of Natural Language Processing tasks." In Machine Learning: ECML-97, 337–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-62858-4_97.

2

Skehan, Peter. "Chapter 11. Performance on second language speaking tasks." In Bilingual Processing and Acquisition, 211–34. Amsterdam: John Benjamins Publishing Company, 2022. http://dx.doi.org/10.1075/bpa.14.11ske.

3

Lehečka, Jan, and Jan Švec. "Comparison of Czech Transformers on Text Classification Tasks." In Statistical Language and Speech Processing, 27–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89579-2_3.

4

Fernández, Javi, Ester Boldrini, José Manuel Gómez, and Patricio Martínez-Barco. "Evaluating EmotiBlog Robustness for Sentiment Analysis Tasks." In Natural Language Processing and Information Systems, 290–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22327-3_41.

5

Xu, Liang, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Hu Yuan, Huilin Xu, Guoao Wei, et al. "Few-Shot Learning for Chinese NLP Tasks." In Natural Language Processing and Chinese Computing, 412–21. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88483-3_33.

6

López-Serrano, Sonia, Julio Roca de Larios, and Rosa M. Manchón. "Chapter 10. Processing output during individual L2 writing tasks." In Writing and Language Learning, 231–54. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/lllt.56.10lop.

7

Labadié, Alexandre, and Violaine Prince. "Finding Text Boundaries and Finding Topic Boundaries: Two Different Tasks?" In Advances in Natural Language Processing, 260–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85287-2_25.

8

Filice, Simone, Giuseppe Castellucci, Danilo Croce, and Roberto Basili. "Effective Kernelized Online Learning in Language Processing Tasks." In Lecture Notes in Computer Science, 347–58. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-06028-6_29.

9

Çano, Erion, and Maurizio Morisio. "Quality of Word Embeddings on Sentiment Analysis Tasks." In Natural Language Processing and Information Systems, 332–38. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59569-6_42.

10

Pironkov, Gueorgui, Stéphane Dupont, Sean U. N. Wood, and Thierry Dutoit. "Noise and Speech Estimation as Auxiliary Tasks for Robust Speech Recognition." In Statistical Language and Speech Processing, 181–92. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68456-7_15.


Conference papers on the topic "Language processing tasks"

1

Xia, Mengzhou, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. "Predicting Performance for Natural Language Processing Tasks." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.764.

2

Mathias, Sandeep, Diptesh Kanojia, Abhijit Mishra, and Pushpak Bhattacharyya. "A Survey on Using Gaze Behaviour for Natural Language Processing." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/683.

Abstract:
Gaze behaviour has been used as a way to gather cognitive information for a number of years. In this paper, we discuss the use of gaze behaviour in solving different tasks in natural language processing (NLP) without having to record it at test time. This is because the collection of gaze behaviour is a costly task, both in terms of time and money. Hence, in this paper, we focus on research done to alleviate the need for recording gaze behaviour at run time. We also mention different eye-tracking corpora in multiple languages which are currently available and can be used in natural language processing. We conclude our paper by discussing applications in one domain, education, and how learning gaze behaviour can help in solving the tasks of complex word identification and automatic essay grading.
3

Malykh, Valentin. "Robust to Noise Models in Natural Language Processing Tasks." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-2002.

4

Dudhabaware, Rahul S., and Mangala S. Madankar. "Review on natural language processing tasks for text documents." In 2014 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, 2014. http://dx.doi.org/10.1109/iccic.2014.7238427.

5

Boros, Tiberiu, Stefan Daniel Dumitrescu, and Sonia Pipa. "Fast and Accurate Decision Trees for Natural Language Processing Tasks." In RANLP 2017 - Recent Advances in Natural Language Processing Meet Deep Learning. Incoma Ltd. Shoumen, Bulgaria, 2017. http://dx.doi.org/10.26615/978-954-452-049-6_016.

6

Suzuki, Jun, Hideki Isozaki, and Eisaku Maeda. "Convolution kernels with feature selection for natural language processing tasks." In the 42nd Annual Meeting. Morristown, NJ, USA: Association for Computational Linguistics, 2004. http://dx.doi.org/10.3115/1218955.1218971.

7

Sauchuk, Artsiom, James Thorne, Alon Halevy, Nicola Tonellotto, and Fabrizio Silvestri. "On the Role of Relevance in Natural Language Processing Tasks." In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3477495.3532034.

8

Lin, Bill Yuchen, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Hua, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, and Salman Avestimehr. "FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks." In Findings of the Association for Computational Linguistics: NAACL 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.findings-naacl.13.

9

Nandra, Constantin I., and Dorian Gorgan. "Workflow Description Language for defining Big Earth Data processing tasks." In 2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2015. http://dx.doi.org/10.1109/iccp.2015.7312703.

10

Sharma, Himanshu. "Improving Natural Language Processing tasks by Using Machine Learning Techniques." In 2021 5th International Conference on Information Systems and Computer Networks (ISCON). IEEE, 2021. http://dx.doi.org/10.1109/iscon52037.2021.9702447.


Reports on the topic "Language processing tasks"

1

Zelenskyi, Arkadii A. Relevance of research of programs for semantic analysis of texts and review of methods of their realization. [n.p.], December 2018. http://dx.doi.org/10.31812/123456789/2884.

Abstract:
One of the main tasks of applied linguistics is solving the problem of high-quality automated processing of natural language. The most popular methods for processing natural-language text responses, for the purpose of extracting and representing semantics, are systems based on an efficient combination of linguistic analysis technologies and statistical analysis methods. Among the existing methods for analyzing text data, one valid method uses a vector model. Another effective and relevant means of extracting semantics from text and representing it is latent semantic analysis (LSA). The LSA method has been tested and has confirmed its effectiveness in such areas of natural language processing as modeling a person's conceptual knowledge and information search, in which LSA shows much better results than conventional vector methods.
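As an illustration of the LSA method the abstract highlights, here is a compact numpy sketch: build a term-document matrix, take a truncated SVD, and compare documents in the reduced latent space. The three-document corpus and the choice of k are invented for demonstration.

```python
import numpy as np

docs = ["language model learns grammar",
        "child learns native language",
        "vector search in information retrieval"]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                           # latent dimensionality
doc_latent = (np.diag(s[:k]) @ Vt[:k]).T        # documents in k-dim space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("sim(doc0, doc1):", round(cosine(doc_latent[0], doc_latent[1]), 3))
print("sim(doc0, doc2):", round(cosine(doc_latent[0], doc_latent[2]), 3))
```

Documents 0 and 1 share latent structure through 'language' and 'learns', so their similarity comes out higher than with the unrelated document 2.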
