Journal articles on the topic 'Visual Linguistic Task'

To see the other types of publications on this topic, follow the link: Visual Linguistic Task.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Visual Linguistic Task.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Fox, Sonya, and Beryl Exley. "Historical Timelines Analyzing Multimodal Text Design." Social Studies Research and Practice 4, no. 3 (November 1, 2009): 17–27. http://dx.doi.org/10.1108/ssrp-03-2009-b0002.

Abstract:
The recent focus on literacy in Social Studies has been on linguistic design, particularly that related to the grammar of written and spoken text. When students are expected to produce complex hybridized genres such as timelines, a focus on the teaching and learning of linguistic design is necessary but not sufficient to complete the task. Theorizations of new literacies identify five interrelated meaning-making designs for text deconstruction and reproduction: linguistic, spatial, visual, gestural, and audio design. Homing in on the complexity of timelines, this paper casts a lens on the linguistic, visual, spatial, and gestural designs of three pairs of primary-school-aged Social Studies learners. Drawing on a functional metalanguage, we analyze the linguistic, visual, spatial, and gestural designs of their work. We also offer suggestions of their effect, and from there consider the importance of explicit instruction in text design choices for this Social Studies task. We conclude the analysis by suggesting the foci of explicit instruction for future lessons.
2

Suhr, Alane, Mike Lewis, James Yeh, and Yoav Artzi. "Evaluating Visual Reasoning through Grounded Language Understanding." AI Magazine 39, no. 2 (July 1, 2018): 45–52. http://dx.doi.org/10.1609/aimag.v39i2.2796.

Abstract:
Autonomous systems that understand natural language must reason about complex language and visual observations. Key to making progress towards such systems is the availability of benchmark datasets and tasks. We introduce the Cornell Natural Language Visual Reasoning (NLVR) corpus, which targets reasoning skills like counting, comparisons, and set theory. NLVR contains 92,244 examples of natural language statements paired with synthetic images and annotated with boolean values for the simple task of determining whether the sentence is true or false about the image. While it presents a simple task, NLVR has been developed to challenge systems with diverse linguistic phenomena and complex reasoning. Linguistic analysis confirms that NLVR presents diversity and complexity beyond what is provided by contemporary benchmarks. Empirical evaluation of several methods further demonstrates the open challenges NLVR presents.
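To make the task format concrete, the sketch below models an NLVR-style item as a (statement, image, label) triple and computes a majority-class baseline. The field names and schema are illustrative assumptions based on the abstract, not the corpus's actual file format.

```python
from dataclasses import dataclass

@dataclass
class NLVRExample:
    """One NLVR-style item: a natural language statement paired with a
    synthetic image and annotated with a boolean truth value."""
    statement: str  # e.g., "There are exactly two towers with a yellow base."
    image_id: str   # identifier of the paired synthetic image
    label: bool     # True if the statement is true of the image

def majority_baseline_accuracy(examples: list[NLVRExample]) -> float:
    """Accuracy of always guessing the more frequent label -- the floor
    any visual-reasoning model should beat on this binary task."""
    n_true = sum(ex.label for ex in examples)
    majority = n_true >= len(examples) - n_true
    return sum(ex.label == majority for ex in examples) / len(examples)
```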
3

Wang, Huafeng, Mengwei Tu, and Meiqiong Liang. "Effects of Perceptual Learning Styles on Chinese EFL Learners’ Writing Proficiency in the Reading-writing Integrated Continuation Task." International Journal of Linguistics 14, no. 6 (December 4, 2022): 77. http://dx.doi.org/10.5296/ijl.v14i6.20521.

Abstract:
Previous studies have shown that the reading-writing integrated continuation task has great language learning potential and that linguistic alignment facilitated by the continuation task positively affects L2 learners’ written performance. As an individual difference construct, perceptual learning style has been investigated for its impact on EFL learning, while research on how it affects learners’ performance in the continuation task remains scarce. To this end, this study investigated the relationship between Chinese EFL learners’ perceptual learning styles and writing proficiency in the reading-writing integrated continuation task. Participants were 46 intermediate learners of L2 English from two intact classes who were required to perform both independent topic writing and the continuation task. The results showed that 1) group- and auditory-style learners slightly outperformed on phrasal alignment, while visual and tactile learners performed better on clausal alignment; 2) visual, tactile and auditory learners were likely to generate content-rich, well-organized and more accurate written production, but students’ linguistic fluency in topic writing exceeded that in the continuation task; 3) learners who prefer auditory input underperformed in continuation writing. These findings confirm that perceptual learning style might be a mediator affecting learners’ linguistic alignment within the continuation task.
4

Lu, Youtao, and James L. Morgan. "Homophone auditory processing in cross-linguistic perspective." Proceedings of the Linguistic Society of America 5, no. 1 (March 23, 2020): 529. http://dx.doi.org/10.3765/plsa.v5i1.4733.

Abstract:
Previous studies reported conflicting results for the effects of homophony on visual word processing across languages. On finding significant differences in homophone density in Japanese, Mandarin Chinese and English, we conducted two experiments to compare native speakers’ competence in homophone auditory processing across these three languages. A lexical decision task showed that the effect of homophony on word processing in Japanese was significantly less detrimental than in Mandarin and English. A word-learning task showed that native Japanese speakers were the fastest in learning novel homophones. These results suggest that language-intrinsic properties influence corresponding language processing abilities of native speakers.
5

Yang, Chih-Chun, Wan-Cyuan Fan, Cheng-Fu Yang, and Yu-Chiang Frank Wang. "Cross-Modal Mutual Learning for Audio-Visual Speech Recognition and Manipulation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3036–44. http://dx.doi.org/10.1609/aaai.v36i3.20210.

Abstract:
As a key characteristic of audio-visual speech recognition (AVSR), relating linguistic information observed across visual and audio data has been a challenge; addressing it benefits not only audio/visual speech recognition (ASR/VSR) but also the manipulation of data within and across modalities. In this paper, we present a feature disentanglement-based framework for jointly addressing the above tasks. By advancing cross-modal mutual learning strategies, our model is able to convert visual or audio-based linguistic features into modality-agnostic representations. Such derived linguistic representations not only allow one to perform ASR, VSR, and AVSR, but also to manipulate audio and visual data output based on the desired subject identity and linguistic content information. We perform extensive experiments on different recognition and synthesis tasks to show that our model performs favorably against state-of-the-art approaches on each individual task, while ours is a unified solution that is able to jointly tackle the aforementioned audio-visual learning tasks.
6

Will, Udo, Guido Nottbusch, and Rüdiger Weingarten. "Linguistic units in word typing." Written Language and Literacy 9, no. 1 (July 20, 2006): 153–76. http://dx.doi.org/10.1075/wll.9.1.10wil.

Abstract:
This study reports on two experiments in which German participants had to type words presented to them in various modes. Experiment 1 compares typing following visual and oral word presentation with typing following picture presentation. In the second experiment, typing responses following oral and visual word presentation were delayed by an extended preparatory period. Both experiments demonstrate significantly increased inter-keystroke intervals (IKIs) at exclusive syllable (S) boundaries and combined syllable and morpheme (SM) boundaries in comparison to within-syllable (L) boundaries. SM-IKIs are significantly larger than S-IKIs and influenced by word frequencies, indicating lexical dependencies. SM-IKIs were found to be significantly longer for oral than for visual word presentation. This is taken as an indication that additional processes are involved in accessing graphemic word forms when words are presented orally. Two effects of the typing delay were identified: a decrease in word-initial latencies and the disappearance of size differences between SM-IKIs following visual and oral word presentation. On the other hand, the persistence of augmented SM- and S-IKIs in the delayed typing task indicates that input into the motor system is constituted by sub-word units rather than by fully specified words. As SM- and S-IKIs reflect influences of different hierarchical levels of language processing, these findings suggest a processing architecture in which the peripheral motor system essentially connects at several hierarchical levels with central processing units.
7

Champoux-Larsson, Marie-France, Alexandra S. Dylman, Helena Örnkloo, and Francisco Esteves. "Identification of facial expressions of emotion by 4-year-old children from different linguistic environments." International Journal of Bilingualism 23, no. 5 (June 13, 2018): 1208–19. http://dx.doi.org/10.1177/1367006918781069.

Abstract:
The current study investigated the identification of facial expressions of emotion, a socio-emotional task that has not previously been examined in children from different linguistic environments. Eighty-four 4-year-olds growing up in one of three linguistic environments (monolingual, dominant bilingual, balanced bilingual) performed a task where they identified facial expressions (happiness, anger, sadness, fear). Accuracy was analysed with a mixed-design analysis of variance using group (monolinguals, dominant bilinguals and balanced bilinguals) and emotion (happy, angry, sad and scared) as between- and within-group variables, respectively. Our results showed a main effect of emotion, but there was no main effect of group. This suggests that 4-year-olds’ linguistic environment does not affect performance on an identification of facial expressions task. This study was the first to investigate the identification of facial expressions of emotion in children coming from different linguistic environments. As the socio-emotional development of bilinguals is not yet well understood, especially regarding the visual perception of emotions, this study is amongst the first to contribute to this area of research. Our results are therefore of significance as a building block for additional studies that should explore the visual perception of emotions in other types of tasks and populations.
8

Gross, Stephanie, Brigitte Krenn, and Matthias Scheutz. "Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction." Interaction Studies 17, no. 2 (December 14, 2016): 180–210. http://dx.doi.org/10.1075/is.17.2.02gro.

Abstract:
Human instructors often refer to objects and actions involved in a task description using both linguistic and non-linguistic means of communication. Hence, for robots to engage in natural human-robot interactions, we need to better understand the various relevant aspects of human multi-modal task descriptions. We analyse reference resolution to objects in a data collection comprising two object manipulation tasks (22 teacher-student interactions in Task 1 and 16 in Task 2) and find that 78.76% of all referring expressions to the objects relevant in Task 1 are verbally underspecified and 88.64% of all referring expressions are verbally underspecified in Task 2. The data strongly suggest that a language processing module for robots must be genuinely multi-modal, allowing for seamless integration of information transmitted in the verbal and the visual channel, whereby tracking the speaker’s eye gaze and gestures as well as object recognition are necessary preconditions.
9

Esaulova, Yulia, Sarah Dolscheid, Sabine Reuters, and Martina Penke. "The Alignment of Agent-First Preferences with Visual Event Representations: Contrasting German and Arabic." Journal of Psycholinguistic Research 50, no. 4 (March 11, 2021): 843–61. http://dx.doi.org/10.1007/s10936-020-09750-3.

Abstract:
How does non-linguistic, visual experience affect language production? A series of experiments addressed this question by examining linguistic and visual preferences for agent positions in transitive action scenarios. In Experiment 1, 30 native German speakers described event scenes where agents were positioned either to the right or to the left of patients. Produced utterances had longer speech onset times for scenes with right- rather than left-positioned agents, suggesting that the visual organization of events can affect sentence production. In Experiment 2, another cohort of 36 native German participants indicated their aesthetic preference for left- or right-positioned agents in mirrored scenes and displayed a preference for scenes with left-positioned agents. In Experiment 3, 37 native Arabic participants performed the same non-verbal task, showing the reverse preference. Our findings demonstrate that non-linguistic visual preferences seem to affect sentence production, which in turn may rely on the writing system of a specific language.
10

Carpio, Claudio Antonio, Diana Valeria Barrios, María Guadalupe Montes, Francisco Aguilar, Daniel García-Gallardo, and Virginia Pacheco. "Linguistic Mediation of Perceptual Adjustment in University Students." Revista Argentina de Ciencias del Comportamiento 13, no. 3 (December 23, 2021): 59–69. http://dx.doi.org/10.32348/1852.4206.v13.n3.27985.

Abstract:
Students from different areas of academic training (Psychology vs. Optometry) completed a task in which they had to locate a "lost moving target" in a simulated forest on a computer screen. The effects of three independent variables were assessed: a) the type of trajectory of the moving target (regular or irregular), b) the time elapsed since the loss of visual contact with the moving target (delays of 1, 4 and 6 seconds), and c) administration or non-administration of verbal consequences for localization responses. Results indicated that accuracy in localization responses was higher with 1) regular trajectories, 2) the shortest delays, and 3) verbal consequences, and 4) among Optometry students. Findings are discussed in terms of the parameters of the task. The contribution of the participants’ academic training is discussed as a linguistic scenario in which different modes of linguistically mediated contact with the environment are learned.
11

Ruz, María, and Anna C. Nobre. "Attention Modulates Initial Stages of Visual Word Processing." Journal of Cognitive Neuroscience 20, no. 9 (September 2008): 1727–36. http://dx.doi.org/10.1162/jocn.2008.20119.

Abstract:
Selective attention has the potential to enhance the initial processing of objects, their spatial locations, or their constituent features. The present study shows that this capacity to modulate initial stages of processing also applies to linguistic attributes. A cueing paradigm focused attention at different levels of word representations on a trial-by-trial basis to study the time course of attentional modulation on visual word processing by means of a high-density electrophysiology recording system. Attention to different linguistic attributes modulated components related to semantic, phonological, and orthographic stages of word processing. Crucially, the N200, associated with initial stages of orthographic decoding, was enhanced by attention to the letter pattern of words. These results suggest that top-down attention has the capacity to enhance initial perceptual stages of visual word processing and support the flexibility of attention in modulating different levels of information processing depending on task goals.
12

Giuliano, Ryan J., Christina M. Karns, Helen J. Neville, and Steven A. Hillyard. "Early Auditory Evoked Potential Is Modulated by Selective Attention and Related to Individual Differences in Visual Working Memory Capacity." Journal of Cognitive Neuroscience 26, no. 12 (December 2014): 2682–90. http://dx.doi.org/10.1162/jocn_a_00684.

Abstract:
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70–90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
13

Li, Gen, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. "Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11336–44. http://dx.doi.org/10.1609/aaai.v34i07.6795.

Abstract:
We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models, such as XLM (Lample and Conneau 2019) and Unicoder (Huang et al. 2019), both visual and linguistic contents are fed into a multi-layer Transformer (Vaswani et al. 2017) for the cross-modal pre-training, where three pre-training tasks are employed, including Masked Language Modeling (MLM), Masked Object Classification (MOC) and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens based on linguistic and visual contents jointly. The last task tries to predict whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning, with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks and show the powerful ability of the cross-modal pre-training.
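As a rough illustration of how the three pre-training objectives named in the abstract can be combined, the sketch below sums masked-language-modeling, masked-object-classification, and visual-linguistic-matching cross-entropy losses in PyTorch. The equal weighting and the -100 masking convention are assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def unicoder_vl_style_loss(mlm_logits: torch.Tensor, mlm_targets: torch.Tensor,
                           moc_logits: torch.Tensor, moc_targets: torch.Tensor,
                           vlm_logits: torch.Tensor, vlm_targets: torch.Tensor) -> torch.Tensor:
    """Joint pre-training loss over the three tasks (assumed weighting).
    mlm_logits/moc_logits: (batch, seq_len, num_classes); targets hold
    -100 at positions that were not masked, so they are ignored."""
    loss_mlm = F.cross_entropy(mlm_logits.flatten(0, 1), mlm_targets.flatten(),
                               ignore_index=-100)
    loss_moc = F.cross_entropy(moc_logits.flatten(0, 1), moc_targets.flatten(),
                               ignore_index=-100)
    # Visual-linguistic matching: binary decision whether image and text match.
    loss_vlm = F.cross_entropy(vlm_logits, vlm_targets)  # vlm_logits: (batch, 2)
    return loss_mlm + loss_moc + loss_vlm
```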
14

Theron, Roberto, and Laura Fontanillo. "Diachronic-information visualization in historical dictionaries." Information Visualization 14, no. 2 (July 12, 2013): 111–36. http://dx.doi.org/10.1177/1473871613495844.

Abstract:
The field of computational linguistics has been dealing with the modeling of natural language from a computational perspective since the 1950s. However, the usage of advanced and interactive visualization techniques is very limited. This is especially the case of diachronic linguistics, which is devoted to the study of language change. This work is part of a project that aims to provide novel highly interactive visual solutions to ease the task of lexicographers, ensuring a rigorous treatment of the vocabulary beyond the use of traditional lexicographic sources and overcoming certain limitations of corpus linguistics. This article focuses on the choices made for the design and development of an interactive visual tool that supports different tasks related to the processes of drawing up and consulting historical dictionaries. Particularly, we describe solutions for the exploitation, analysis, and expert-directed validation of the data compiled in the available dictionaries in a manner that is both automatic (provided by computational methods) and intelligent (provided by the experts). We thus describe diachronlex diagrams, an interactive visual solution that facilitates linguistic work related to the understanding of the temporal evolution and the lexical relationships between the different meanings registered in subsequent editions of a dictionary.
15

Paffen, Chris L. E., Andre Sahakian, Marijn E. Struiksma, and Stefan Van der Stigchel. "Unpredictive linguistic verbal cues accelerate congruent visual targets into awareness in a breaking continuous flash suppression paradigm." Attention, Perception, & Psychophysics 83, no. 5 (March 30, 2021): 2102–12. http://dx.doi.org/10.3758/s13414-021-02297-y.

Abstract:
One of the most influential ideas within the domain of cognition is that of embodied cognition, in which the experienced world is the result of an interplay between an organism’s physiology, sensorimotor system, and its environment. An aspect of this idea is that linguistic information activates sensory representations automatically. For example, hearing the word ‘red’ would automatically activate sensory representations of this color. But does linguistic information prioritize access to awareness of congruent visual information? Here, we show that linguistic verbal cues accelerate matching visual targets into awareness by using a breaking continuous flash suppression paradigm. In a speeded reaction time task, observers heard spoken color labels (e.g., red) followed by colored targets that were either congruent (red), incongruent (green), or neutral (a neutral noncolor word) with respect to the labels. Importantly, and in contrast to previous studies investigating a similar question, the incidence of congruent trials was not higher than that of incongruent trials. Our results show that RTs were selectively shortened for congruent verbal–visual pairings, and that this shortening occurred over a wide range of cue–target intervals. We suggest that linguistic verbal information preactivates sensory representations, so that hearing the word ‘red’ preactivates (visual) sensory information internally.
16

Kádár, Ákos, Grzegorz Chrupała, and Afra Alishahi. "Representation of Linguistic Form and Function in Recurrent Neural Networks." Computational Linguistics 43, no. 4 (December 2017): 761–80. http://dx.doi.org/10.1162/coli_a_00300.

Abstract:
We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture consisting of two parallel pathways with shared word embeddings: The Visual pathway is trained on predicting the representations of the visual scene corresponding to an input sentence, and the Textual pathway is trained to predict the next word in the same sentence. We propose a method for estimating the amount of contribution of individual tokens in the input to the final prediction of the networks. Using this method, we show that the Visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence. In contrast, the language models are comparatively more sensitive to words with a syntactic function. Further analysis of the most informative n-gram contexts for each model shows that in comparison with the Visual pathway, the language models react more strongly to abstract contexts that represent syntactic constructions.
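One plausible way to implement the per-token contribution measure described above is an "omission score": re-encode the sentence with one token removed and measure how much the final representation changes. The sketch below assumes an `encode` function mapping a token list to a vector; the 1 − cosine formulation is an assumption about the paper's method, not a verbatim reproduction of it.

```python
import numpy as np

def omission_scores(tokens: list[str], encode) -> list[float]:
    """Contribution of each token, estimated as 1 - cosine similarity
    between the full-sentence encoding and the encoding with that
    token omitted. `encode(tokens) -> np.ndarray` is assumed."""
    full = encode(tokens)
    scores = []
    for i in range(len(tokens)):
        reduced = encode(tokens[:i] + tokens[i + 1:])
        cos = float(np.dot(full, reduced) /
                    (np.linalg.norm(full) * np.linalg.norm(reduced)))
        scores.append(1.0 - cos)
    return scores
```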
17

Prasad, Seema, Shiji Viswambharan, and Ramesh Mishra. "Visual working memory load constrains language non-selective activation under task-demands." Linguistic Approaches to Bilingualism 10, no. 6 (June 3, 2019): 805–46. http://dx.doi.org/10.1075/lab.18045.pra.

Abstract:
Visual world studies with bilinguals have demonstrated spontaneous cross-linguistic activations. In two experiments, we examined whether concurrent visual working memory (VWM) load constrains bilingual parallel activation during spoken word comprehension. Hindi-English bilinguals heard a spoken word in Hindi (L1) or English (L2) and saw a display containing the spoken word-referent, a phonological cohort of the spoken word’s translation and two unrelated objects. Participants completed a concurrent WM task of remembering an array of five coloured squares and judging its similarity with a test array. Participants were asked to click on the spoken word-referent in Experiment 1 but not in Experiment 2. Reduced parallel activation and enhanced target activation were observed under load for L2 spoken words in Experiment 1 (where the task-demands were high). The findings suggest that a VWM load can constrain the spontaneous activation of an irrelevant lexicon under certain conditions.
18

Schröter, Pauline, and Sascha Schroeder. "DIFFERENCES IN VISUAL WORD RECOGNITION BETWEEN L1 AND L2 SPEAKERS." Studies in Second Language Acquisition 40, no. 2 (October 17, 2017): 319–39. http://dx.doi.org/10.1017/s0272263117000201.

Abstract:
Investigating the impact of linguistic characteristics on visual word recognition in children, we studied whether differences in native (L1) and second language (L2) processing already emerge at the beginning of reading development. German elementary school students in grades 2 to 6 completed a battery of standardized tests and a lexical decision task (LDT). Though L1 speakers outperformed L2 speakers on German skills, groups did not differ in their overall performance on the LDT. However, results from mixed-effect models revealed greater effects for word frequency and length in L2 over L1 speakers, indicating qualitative differences in the sensitivity to linguistic information between groups. This distinction persisted across all grades and after controlling for differences in vocabulary size and reading fluency. Findings extend evidence provided for adult L2 processing, suggesting that varying language exposure shapes the development of the word-recognition system already in the early stages of reading development.
19

Zhou, Qianli, Tianrui Hui, Rong Wang, Haimiao Hu, and Si Liu. "Attentive Excitation and Aggregation for Bilingual Referring Image Segmentation." ACM Transactions on Intelligent Systems and Technology 12, no. 2 (March 2021): 1–17. http://dx.doi.org/10.1145/3446345.

Abstract:
The goal of referring image segmentation is to identify the object matched with an input natural language expression. Previous methods only support English descriptions, whereas Chinese is also broadly used around the world, which limits the potential application of this task. Therefore, we propose to extend existing datasets with Chinese descriptions and preprocessing tools for training and evaluating bilingual referring segmentation models. In addition, previous methods also lack the ability to collaboratively learn channel-wise and spatial-wise cross-modal attention to well align visual and linguistic modalities. To tackle these limitations, we propose a Linguistic Excitation module to excite image channels guided by language information and a Linguistic Aggregation module to aggregate multimodal information based on image-language relationships. Since different levels of features from the visual backbone encode rich visual information, we also propose a Cross-Level Attentive Fusion module to fuse multilevel features gated by language information. Extensive experiments on four English and Chinese benchmarks show that our bilingual referring image segmentation model outperforms previous methods.
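The channel-wise gating idea behind the Linguistic Excitation module can be sketched as a squeeze-and-excitation-style layer driven by a sentence vector. The dimensions and the sigmoid gate below are assumptions inferred from the abstract, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LinguisticExcitation(nn.Module):
    """Excite (re-weight) visual feature channels using a language vector."""
    def __init__(self, lang_dim: int, num_channels: int):
        super().__init__()
        # Map the sentence vector to one multiplicative gate per channel.
        self.gate = nn.Sequential(nn.Linear(lang_dim, num_channels), nn.Sigmoid())

    def forward(self, visual_feats: torch.Tensor, lang_vec: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, C, H, W); lang_vec: (B, lang_dim)
        weights = self.gate(lang_vec)                    # (B, C) channel gates
        return visual_feats * weights[:, :, None, None]  # broadcast over H and W
```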
20

Wu, Shiyu, and Zheng Ma. "Cross-linguistic Phonological Interference in L2 Visual Word Reading: Evidence from the Semantic Relatedness Decision Task." Acta Psychologica Sinica 47, no. 11 (2015): 1318. http://dx.doi.org/10.3724/sp.j.1041.2015.01318.

21

Burton, H., J. B. Diamond, and K. B. McDermott. "Dissociating Cortical Regions Activated by Semantic and Phonological Tasks: A fMRI Study in Blind and Sighted People." Journal of Neurophysiology 90, no. 3 (September 2003): 1965–82. http://dx.doi.org/10.1152/jn.00279.2003.

Abstract:
Previous neuroimaging studies of language processing in blind individuals described cortical activation of primary (V1) and higher tier visual areas, irrespective of the age of blindness onset. Specifically, participants were given nouns and asked to generate an associated verb. These results confirmed the presence of adaptations in the visual cortex of blind people and suggested that these responses represented linguistic operations. The present functional magnetic resonance imaging study attempted to further characterize these responses as being preferential for semantic or phonological processing. Three groups of participants (sighted, early-onset, and late-onset blind) heard lists of related words and attended to either a common meaning (semantic task) or common rhyme (phonological task) that linked the words. In all three groups, the semantic task elicited stronger activity in the left anterior inferior frontal gyrus and the phonological task evoked stronger activity bilaterally in the inferior parietal cortex and posterior aspects of the left inferior frontal gyrus. Only blind individuals showed activity in occipital, temporal, and parietal components of visual cortex. The spatial extent of visual cortex activity was greatest in the early blind, who exhibited activation in all ventral and dorsal visual cortex subdivisions (V1 through MT) for both tasks. Preferential activation appeared for the semantic task. Late blind individuals exhibited responses in ventral and dorsal V1, ventral V2, VP and V8, but only for the semantic task. Our findings support prior evidence of visual cortex activity in blind people engaged in auditory language processing and suggest that this activity may be related to semantic processing.
22

Yan, Jing, and Stephen Matthews. "Relative clauses in English-Mandarin bilingual children." Chinese Language and Discourse 8, no. 1 (September 21, 2017): 1–17. http://dx.doi.org/10.1075/cld.8.1.01yan.

Abstract:
The role of cross-linguistic influence in bilingual children’s development remains a matter of debate. Some researchers have proposed that simultaneous bilingual learners develop the linguistic systems of two languages in the same way as matched monolingual children do. Other researchers have argued that bilingual children show different developmental pathways. This study investigates cross-linguistic influence in the acquisition of relative clauses by English-Mandarin bilingual children in Singapore. The elicitation task included narration and interview tasks. Thirty-six primary school students aged 6 to 11 years completed the task in both English and Mandarin. The results reveal that the number of relative clauses increased with age in both languages. Participants had a preference for subject relatives over object relatives. The most frequent error type in Mandarin involves postnominal relative clauses, which have not been reported in monolingual children in the literature and thus can be treated as evidence of transfer from English. The findings of this study provide evidence for cross-linguistic influence in bilingual children’s speech.
23

Mas-Herrero, Ernest, Daniel Adrover-Roig, María Ruz, and Ruth de Diego-Balaguer. "Do Bilinguals Outperform Monolinguals in Switching Tasks? Contrary Evidence for Nonlinguistic and Linguistic Switching Tasks." Neurobiology of Language 2, no. 4 (2021): 586–604. http://dx.doi.org/10.1162/nol_a_00059.

Abstract:
The benefits of bilingualism in executive functions are highly debated. Even so, in switching tasks, these effects seem robust, although smaller than initially thought (Gunnerud et al., 2020; Ware et al., 2020). By handling two languages throughout their lifespan, bilinguals appear to train their executive functions and show benefits in nonlinguistic switching tasks compared to monolinguals. Nevertheless, because bilinguals need to control for the interference of another language, they may show a disadvantage when dealing with task-switching paradigms requiring language control, particularly when those are performed in their less dominant language. The present work explored this issue by studying bilingualism’s effects on task switching within the visual and language domains. On the one hand, our results show that bilinguals were overall faster and presented reduced switch costs compared to monolinguals when performing perceptual geometric judgments with no time for task preparation. On the other hand, no bilingual advantage was found when a new sample of comparable bilinguals and monolinguals completed a within-language switching task. Our results provide clear evidence favoring the bilingual advantage, yet only when the task imposes greater executive demands and does not involve language control.
24

Strijkers, Kristof, Daisy Bertrand, and Jonathan Grainger. "Seeing the Same Words Differently: The Time Course of Automaticity and Top–Down Intention in Reading." Journal of Cognitive Neuroscience 27, no. 8 (August 2015): 1542–51. http://dx.doi.org/10.1162/jocn_a_00797.

Abstract:
We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieve the meaning of the written input (semantic categorization) versus a situation where no language processing is necessary (ink color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset. On the other hand, when categorizing the colored font of the very same words in the color task, word frequency did not modulate ERPs until some 100 msec later (220 msec poststimulus onset) and did so for a shorter period and with a smaller scalp distribution. The results demonstrate that, although written words indeed elicit automatic recognition processes in the brain, the speed and quality of lexical processing critically depends on the top–down intention to engage in a linguistic task.
25

Li, Larry Hong-lin. "You Speak What They Wear." Asian Journal of Social Science 47, no. 2 (June 7, 2019): 169–203. http://dx.doi.org/10.1163/15685314-04702002.

Abstract:
We manipulated persona features characterising US and Taike subcultures, and examined their impact on preference toward Chinese-English alternated uses among Taiwanese youngsters. We conducted (i) a literature survey to identify the features iconic of US and Taike subcultures, (ii) a norming task to verify the subculture icons obtained, (iii) a multiple-choice task to survey preference for six relevant non-/mixed forms of language, and (iv) a forced-choice task to inspect “relative” code choices between Chinese and its code-mix with English elements under the cues of the probe features. We found that visual cues and the stereotypical generalisations thereof play a role in language negotiation in first-meeting contexts; cultural personae manifest themselves in language alternation; with code mixing as an accommodative move, language users self-categorise themselves with the interlocutor that is stereotyped as having a linguistic preference associated with their persona character; and linguistic convergence to stereotypes is driven by unconscious need.
26

Burton, H., A. Z. Snyder, J. B. Diamond, and M. E. Raichle. "Adaptive Changes in Early and Late Blind: A fMRI Study of Verb Generation to Heard Nouns." Journal of Neurophysiology 88, no. 6 (December 1, 2002): 3359–71. http://dx.doi.org/10.1152/jn.00129.2002.

Abstract:
Literacy for blind people requires learning Braille. Along with others, we have shown that reading Braille activates visual cortex. This includes striate cortex (V1), i.e., banks of calcarine sulcus, and several higher visual areas in lingual, fusiform, cuneus, lateral occipital, inferior temporal, and middle temporal gyri. The spatial extent and magnitude of magnetic resonance (MR) signals in visual cortex are greatest for those who became blind early in life. Individuals who lost sight as adults, and subsequently learned Braille, still exhibited activity in some of the same visual cortex regions, especially V1. These findings suggest these visual cortex regions become adapted to processing tactile information and that this cross-modal neural change might support Braille literacy. Here we tested the alternative hypothesis that these regions directly respond to linguistic aspects of a task. Accordingly, language task performance by blind persons should activate the same visual cortex regions regardless of input modality. Specifically, visual cortex activity in blind people ought to arise during a language task involving heard words. Eight early blind, six late blind, and eight sighted subjects were studied using functional magnetic resonance imaging (fMRI) during covert generation of verbs to heard nouns. The control task was passive listening to indecipherable sounds (reverse words) matched to the nouns in sound intensity, duration, and spectral content. Functional responses were analyzed at the level of individual subjects using methods based on the general linear model and at the group level, using voxel-based ANOVA and t-test analyses. Blind and sighted subjects showed comparable activation of language areas in left inferior frontal, dorsolateral prefrontal, and left posterior superior temporal gyri. The main distinction was bilateral, left dominant activation of the same visual cortex regions previously noted with Braille reading in all blind subjects. The spatial extent and magnitude of responses were greatest on the left in early blind individuals. Responses in the late blind group mostly were confined to V1 and nearby portions of the lingual and fusiform gyri. These results confirm the presence of adaptations in visual cortex of blind people but argue against the notion that this activity during Braille reading represents somatosensory (haptic) processing. Rather, we suggest that these responses can be most parsimoniously explained in terms of linguistic operations. It remains possible that these responses represent adaptations which initially are for processing either sound or touch, but which are later generalized to the other modality during acquisition of Braille reading skills.
27

Serratrice, Ludovica, and Cécile De Cat. "Individual differences in the production of referential expressions: The effect of language proficiency, language exposure and executive function in bilingual and monolingual children." Bilingualism: Language and Cognition 23, no. 2 (April 22, 2019): 371–86. http://dx.doi.org/10.1017/s1366728918000962.

Abstract:
One hundred and seventy-two English-speaking 5- to 7-year-olds participated in a referential communication task where we manipulated the linguistic mention and the visual presence of a competitor alongside a target referent. Eighty-seven of the children were additionally exposed to a language other than English (bilinguals). We measured children's language proficiency, verbal working memory (WM), cognitive control skills, family SES, and relative amount of cumulative exposure and use of the home language for the bilinguals. Children's use of full Noun Phrases (NPs) to identify a target referent was predicted by the visual presence of a competitor more than by its linguistic mention. Verbal WM and proficiency predicted NP use, while cognitive control skills predicted both the ability to use expressions signalling discourse integration and sensitivity to the presence of a discourse competitor, but not of a visual competitor. Bilingual children were as informative as monolingual children once proficiency was controlled for.
28

Pals, Carina, Anastasios Sarampalis, and Deniz Başkent. "Listening Effort With Cochlear Implant Simulations." Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1075–84. http://dx.doi.org/10.1044/1092-4388(2012/12-0074).

Abstract:
Purpose: Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method: Nineteen normal-hearing participants listened to CI simulations with varying numbers of spectral channels. A dual-task paradigm combining an intelligibility task with either a linguistic or nonlinguistic visual response-time (RT) task measured intelligibility and listening effort. The simultaneously performed tasks compete for limited cognitive resources; changes in effort associated with the intelligibility task are reflected in changes in RT on the visual task. A separate self-report scale provided a subjective measure of listening effort. Results: All measures showed significant improvements with increasing spectral resolution up to 6 channels. However, only the RT measure of listening effort continued improving up to 8 channels. The effects were stronger for RTs recorded during listening than for RTs recorded between listening. Conclusion: The results suggest that listening effort decreases with increased spectral resolution. Moreover, these improvements are best reflected in objective measures of listening effort, such as RTs on a secondary task, rather than intelligibility scores or subjective effort measures.
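The logic of the dual-task measure is that effort spent on listening slows responses on the concurrent visual task. A minimal sketch of that computation, assuming lists of response times per condition (variable names are hypothetical, and this is a generic dual-task analysis, not the study's exact statistical model):

```python
import statistics

def listening_effort_ms(dual_task_rts: list[float], baseline_rts: list[float]) -> float:
    """Listening effort proxy: mean secondary-task RT under dual-task
    load minus mean RT on the visual task performed alone. Larger
    values indicate more cognitive resources diverted to listening."""
    return statistics.mean(dual_task_rts) - statistics.mean(baseline_rts)

# e.g., compare effort across CI simulations with 4 vs. 8 spectral channels:
# effort_4ch = listening_effort_ms(rts_dual_4ch, rts_visual_only)
# effort_8ch = listening_effort_ms(rts_dual_8ch, rts_visual_only)
```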
29

Lin, Yi, Hongwei Ding, and Yang Zhang. "Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects." Journal of Speech, Language, and Hearing Research 63, no. 3 (March 23, 2020): 896–912. http://dx.doi.org/10.1044/2020_jslhr-19-00258.

Abstract:
Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel auditory-alone task (i.e., semantics–prosody Stroop task) and a cross-modal audiovisual task (i.e., semantics–prosody–face Stroop task). Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention. Results: Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and the congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
30

McClung, Sarah N., and Ziho Kang. "Characterization of Visual Scanning Patterns in Air Traffic Control." Computational Intelligence and Neuroscience 2016 (2016): 1–17. http://dx.doi.org/10.1155/2016/8343842.

Abstract:
Characterization of air traffic controllers’ (ATCs’) visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and the increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking for accurately reducing the eye tracking data to the simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization-classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and the number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs’ linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.
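One standard first step when filtering raw scanpaths into simpler pattern strings is to collapse consecutive fixations on the same area of interest (AOI). The paper's multi-intensity filtering is more elaborate, so the sketch below is only a minimal illustration with made-up AOI labels.

```python
from itertools import groupby

def simplify_scanpath(aoi_fixations: list[str]) -> list[str]:
    """Collapse runs of consecutive fixations on the same AOI into a
    single visit, e.g. [A, A, B, A, A] -> [A, B, A]."""
    return [aoi for aoi, _ in groupby(aoi_fixations)]

print(simplify_scanpath(["AC1", "AC1", "AC2", "AC1", "AC1"]))  # ['AC1', 'AC2', 'AC1']
```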
31

Oosthuizen, Ilze, Erin M. Picou, Lidia Pottas, Hermanus Carel Myburgh, and De Wet Swanepoel. "Listening Effort in Native and Nonnative English-Speaking Children Using Low Linguistic Single- and Dual-Task Paradigms." Journal of Speech, Language, and Hearing Research 63, no. 6 (June 22, 2020): 1979–89. http://dx.doi.org/10.1044/2020_jslhr-19-00330.

Abstract:
Purpose: It is not clear if behavioral indices of listening effort are sensitive to changes in signal-to-noise ratio (SNR) for young children (7–12 years old) from multilingual backgrounds. The purpose of this study was to explore the effects of SNR on listening effort in multilingual school-aged children (native English, nonnative English) as measured with a single- and a dual-task paradigm with low-linguistic speech stimuli (digits). The study also aimed to explore age effects on digit triplet recognition and response times (RTs). Method: Sixty children with normal hearing participated, 30 per language group. Participants completed single and dual tasks in three SNRs (quiet, −10 dB, and −15 dB). Speech stimuli for both tasks were digit triplets. Verbal RTs were the listening effort measure during the single-task paradigm. A visual monitoring task was the secondary task during the dual-task paradigm. Results: Significant effects of SNR on RTs were evident during both single- and dual-task paradigms. As expected, language background did not affect the pattern of RTs. The data also demonstrate a maturation effect for triplet recognition during both tasks and for RTs during the dual-task only. Conclusions: Both single- and dual-task paradigms were sensitive to changes in SNR for school-aged children between 7 and 12 years of age. Language background (English as native language vs. English as nonnative language) had no significant effect on triplet recognition or RTs, demonstrating the practical utility of low-linguistic stimuli for testing children from multilingual backgrounds.
32

Russo, Sofia, Giulia Calignano, Marco Dispaldro, and Eloisa Valenza. "An Integrated Perspective on Spatio-Temporal Attention and Infant Language Acquisition." International Journal of Environmental Research and Public Health 18, no. 4 (February 8, 2021): 1592. http://dx.doi.org/10.3390/ijerph18041592.

Abstract:
Efficiency in the early ability to switch attention toward competing visual stimuli (spatial attention) may be linked to future ability to detect rapid acoustic changes in linguistic stimuli (temporal attention). To test this hypothesis, we compared individual performances in the same cohort of Italian-learning infants in two separate tasks: (i) an overlap task, measuring disengagement efficiency for visual stimuli at 4 months (Experiment 1), and (ii) an auditory discrimination task for trochaic syllabic sequences at 7 months (Experiment 2). Our results indicate that an infant’s efficiency in processing competing information in the visual field (i.e., visuospatial attention; Exp. 1) correlates with the subsequent ability to orient temporal attention toward relevant acoustic changes in the speech signal (i.e., temporal attention; Exp. 2). These results point out the involvement of domain-general attentional processes (not specific to language or the sensorial domain) playing a pivotal role in the development of early language skills in infancy.
33

Secora, Kristen, and Karen Emmorey. "Visual-Spatial Perspective-Taking in Spatial Scenes and in American Sign Language." Journal of Deaf Studies and Deaf Education 25, no. 4 (June 1, 2020): 447–56. http://dx.doi.org/10.1093/deafed/enaa006.

Abstract:
As spatial languages, sign languages rely on spatial cognitive processes that are not involved for spoken languages. Interlocutors have different visual perspectives of the signer’s hands requiring a mental transformation for successful communication about spatial scenes. It is unknown whether visual-spatial perspective-taking (VSPT) or mental rotation (MR) abilities support signers’ comprehension of perspective-dependent American Sign Language (ASL) structures. A total of 33 deaf ASL adult signers completed tasks examining nonlinguistic VSPT ability, MR ability, general ASL proficiency (ASL-Sentence Reproduction Task [ASL-SRT]), and an ASL comprehension test involving perspective-dependent classifier constructions (the ASL Spatial Perspective Comprehension Test [ASPCT] test). Scores on the linguistic (ASPCT) and VSPT tasks positively correlated with each other and both correlated with MR ability; however, VSPT abilities predicted linguistic perspective-taking better than did MR ability. ASL-SRT scores correlated with ASPCT accuracy (as both require ASL proficiency) but not with VSPT scores. Therefore, the ability to comprehend perspective-dependent ASL classifier constructions relates to ASL proficiency and to nonlinguistic VSPT and MR abilities.
34

Kapiley, Keerthana, and Ramesh Kumar Mishra. "Iconic culture-specific images influence language non-selective translation activation in bilinguals." Translation, Cognition & Behavior 1, no. 2 (September 27, 2018): 221–50. http://dx.doi.org/10.1075/tcb.00010.kap.

Abstract:
Two experiments using the visual-world paradigm examined whether culture-specific images influence the activation of translation equivalents during spoken-word recognition in bilinguals. In Experiment 1, the participants performed a visual-world task during which they were asked to click on the target after hearing the spoken word (L1 or L2). In Experiment 2, the participants were presented with culture-specific images (faces representing L1, L2 and Neutral) during the visual-world task. Time-course analysis of Experiment 1 revealed that there was a significantly higher number of looks to the TE-cohort member compared to distractors only when participants heard L2 words. In Experiment 2, when the culture-specific images were congruent with the spoken word’s language, participants deployed a higher number of looks to the TE-cohort member compared to distractors. This effect was seen in both language directions but not when the culture-specific images were incongruent with the spoken word. The eye-tracking data suggest that culture-specific images influence cross-linguistic activation of semantics during bilingual audio-visual language processing.
35

Wallmark, Zachary, Linh Nghiem, and Lawrence E. Marks. "Does Timbre Modulate Visual Perception? Exploring Crossmodal Interactions." Music Perception 39, no. 1 (September 1, 2021): 1–20. http://dx.doi.org/10.1525/mp.2021.39.1.1.

Abstract:
Musical timbre is often described using terms from non-auditory senses, mainly vision and touch; but it is not clear whether crossmodality in timbre semantics reflects multisensory processing or simply linguistic convention. If multisensory processing is involved in timbre perception, the mechanism governing the interaction remains unknown. To investigate whether timbres commonly perceived as “bright-dark” facilitate or interfere with visual perception (darkness-brightness), we designed two speeded classification experiments. Participants were presented consecutive images of slightly varying (or the same) brightness along with task-irrelevant auditory primes (“bright” or “dark” tones) and asked to quickly identify whether the second image was brighter/darker than the first. Incongruent prime-stimulus combinations produced significantly more response errors compared to congruent combinations but choice reaction time was unaffected. Furthermore, responses in a deceptive identical-image condition indicated subtle semantically congruent response bias. Additionally, in Experiment 2 (which also incorporated a spatial texture task), measures of reaction time (RT) and accuracy were used to construct speed-accuracy tradeoff functions (SATFs) in order to critically compare two hypothesized mechanisms for timbre-based crossmodal interactions, sensory response change vs. shift in response criterion. Results of the SATF analysis are largely consistent with the response criterion hypothesis, although without conclusively ruling out sensory change.
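A speed-accuracy tradeoff function can be built empirically by binning trials on reaction time and computing accuracy within each bin. The quantile-binning sketch below is a generic construction under that assumption, not the authors' exact SATF fitting procedure.

```python
import numpy as np

def satf_points(rts_ms, correct, n_bins: int = 5):
    """Return (mean RT, accuracy) per RT-quantile bin: the empirical
    points of a speed-accuracy tradeoff function (SATF)."""
    rts = np.asarray(rts_ms, dtype=float)
    acc = np.asarray(correct, dtype=float)
    edges = np.quantile(rts, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(rts, edges[1:-1])  # bin indices 0 .. n_bins-1
    return [(rts[bins == b].mean(), acc[bins == b].mean())
            for b in range(n_bins) if np.any(bins == b)]
```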
36

Lachter, Joel, and Mary Hayhoe. "Capacity Limitations in Memory for Visual Locations." Perception 24, no. 12 (December 1995): 1427–41. http://dx.doi.org/10.1068/p241427.

Abstract:
This paper examines people's ability to make judgments that require knowing the relative positions of objects that are not simultaneously visible. It has previously been shown that people can accurately perform such a task. The current experiments test the capacity limits for such tasks. Two experiments were conducted that required subjects to make spatial judgments based on sequences of points presented two at a time. It was shown that, whereas subjects can perform accurately when memory for a small number of dots (about four) is required, increasing the number of dots results in a radical reduction in performance. This argues against both the idea that spatial memory is based on a linguistic description and the idea that it is based on an image-like representation. Rather, it appears that one can form an accurate representation of the spatial properties of only a small number of objects.
APA, Harvard, Vancouver, ISO, and other styles
37

Barnett, Mary Jane. "Erasmus and the Hermeneutics of Linguistic Praxis." Renaissance Quarterly 49, no. 3 (1996): 542–72. http://dx.doi.org/10.2307/2863366.

Full text
Abstract:
Erasmian hermeneutics are notoriously difficult to describe clearly because Erasmus is always looking in two directions at once — both toward the ideal, perfectly expressive Word and toward the multitude of imperfect, human words caught in the tumult of history and transmission. In the Enchiridion (1503), a relatively early work, he argues that words inevitably fall short of their task of miming the Logos, that the smallness of the manna rained down on the Israelites in Exodus 16 “signifies the lowliness of speech that conceals immense mysteries in almost crude language.” Erasmus believes in an essential connection of some kind between res and verbum, but it is clear that he holds as well to the Platonic view that this connection is always necessarily inadequate, that there can be an approach but never an arrival at complete meaning through human language.
APA, Harvard, Vancouver, ISO, and other styles
38

Sevastjanova, Rita, Wolfgang Jentner, Fabian Sperrle, Rebecca Kehlbeck, Jürgen Bernard, and Mennatallah El-assady. "QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (December 31, 2021): 1–38. http://dx.doi.org/10.1145/3429448.

Full text
Abstract:
Linguistic insight in the form of high-level relationships and rules in text builds the basis of our understanding of language. However, the data-driven generation of such structures often lacks labeled resources that can be used as training data for supervised machine learning. The creation of such ground-truth data is a time-consuming process that often requires domain expertise to resolve text ambiguities and characterize linguistic phenomena. Furthermore, the creation and refinement of machine learning models is often challenging for linguists, as the models are often complex, opaque, and difficult to understand. To tackle these challenges, we present a visual analytics technique for interactive data labeling that applies concepts from gamification and explainable Artificial Intelligence (XAI) to support complex classification tasks. The visual-interactive labeling interface promotes the creation of effective training data. Visual explanations of learned rules unveil the decisions of the machine learning model and support iterative and interactive optimization. The gamification-inspired design guides the user through the labeling process and provides feedback on the model performance. As an instance of the proposed technique, we present QuestionComb, a workspace tailored to the task of question classification (i.e., distinguishing information-seeking from non-information-seeking questions). Our evaluation studies confirm that gamification concepts are beneficial for engaging users through continuous feedback, offering an effective visual analytics technique when combined with active learning and XAI.
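The interactive-labeling loop underlying such tools pairs a classifier with an uncertainty-based query strategy: the model proposes its least certain instances, the user labels them, and the model is retrained. A minimal sketch in Python using scikit-learn; the function names and the oracle callback are illustrative assumptions, not QuestionComb's API:

# Minimal sketch of an interactive labeling loop with uncertainty sampling:
# the model proposes its most uncertain instances, the user (oracle) labels
# them, and the classifier is retrained. Illustrative only, not QuestionComb.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool, k=5):
    """Pick the k instances whose predicted class probability is nearest 0.5."""
    proba = model.predict_proba(X_pool)[:, 1]
    return np.argsort(np.abs(proba - 0.5))[:k]

def interactive_labeling(X_labeled, y_labeled, X_pool, oracle, rounds=10):
    model = LogisticRegression()
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        query_idx = uncertainty_sampling(model, X_pool)
        # In a QuestionComb-style tool, this is where the user assigns labels,
        # guided by visual explanations of the learned rules.
        new_labels = oracle(X_pool[query_idx])
        X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_pool = np.delete(X_pool, query_idx, axis=0)
    return model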
APA, Harvard, Vancouver, ISO, and other styles
39

Cuskley, Christine. "Mappings between linguistic sound and motion." Public Journal of Semiotics 5, no. 1 (December 8, 2013): 39–62. http://dx.doi.org/10.37693/pjos.2013.5.9651.

Full text
Abstract:
This paper provides an overview of the possible function of non-arbitrary mappings between linguistic form and meaning, and presents new empirical evidence showing that shared cross-modal associations may underlie motion sound-symbolism in particular. In terms of function, several lines of empirical and theoretical evidence suggest that non-arbitrary form-meaning connections could have played a crucial role in lexical emergence during language evolution. Furthermore, the persistence of such non-arbitrariness in some areas of modern language may also be highly functional, as recent data have shown that non-arbitrary forms may help to bootstrap learning in children (Imai, Kita, Nagumo, and Okada, 2008) and adults (Nielsen and Rendall, 2012). Given the functional role of these non-arbitrary mappings between linguistic form and meaning, this paper describes new experimental data demonstrating shared mappings between nonsense words and visual motion using a direct matching task. Participants were given nonsense words that varied in terms of their voicing, reduplication, and vowel quality, and asked to change the movement of a ball to match a given word. Results show that back vowels are mapped onto slower speeds, and consonant reduplication with vowel alternation is mapped onto faster speeds. These results show a shared cross-modal association between linguistic sound and motion, which is likely leveraged in sound-symbolic systems found in natural language.
APA, Harvard, Vancouver, ISO, and other styles
40

Bolognesi, Marianna. "Using semantic feature norms to investigate how the visual and verbal modes afford metaphor construction and expression." Language and Cognition 9, no. 3 (October 18, 2016): 525–52. http://dx.doi.org/10.1017/langcog.2016.27.

Full text
Abstract:
In this study, two modalities of expression (verbal and visual) are compared and contrasted in relation to their ability, and their limitations, to construct and express metaphors. A representative set of visual metaphors and a representative set of linguistic metaphors are compared, and the semantic similarity between metaphor terms is modeled within the two sets. Such similarity is operationalized in terms of semantic features produced by informants in a property generation task (e.g., McRae et al., 2005). Semantic features provide insights into conceptual content, and play a role in deep conceptual processing, as opposed to shallow linguistic processing. Thus, semantic features appear to be useful for modeling metaphor comprehension, assuming that metaphors are matters of thought rather than simple figures of speech (Lakoff & Johnson, 1980). The question tackled in this paper is whether semantic features can account for the similarity between metaphor terms of both visual and verbal metaphors. For this purpose, a database of semantic features was collected and then used to analyze fifty visual metaphors and fifty verbal metaphors. It was found that the number of semantic features shared between metaphor terms is predicted by the modality of expression of the metaphor: the terms compared in visual metaphors share semantic features, while the terms compared in verbal metaphors do not. This suggests that the two modalities of expression afford different ways to construct and express metaphors.
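The underlying overlap measure is easy to state concretely: each metaphor term is represented by the set of semantic features informants produced for it, and similarity is the overlap of those sets. A toy sketch in Python; the feature sets below are invented for illustration, not entries from the collected database:

# Sketch of the shared-semantic-feature measure: each metaphor term maps to
# the set of features informants produced for it, and similarity is the
# intersection of those sets. Feature sets here are invented examples.
def shared_features(term_a, term_b, norms):
    return norms[term_a] & norms[term_b]

norms = {
    "moon":  {"round", "bright", "distant", "white"},
    "pearl": {"round", "bright", "white", "precious"},
    "idea":  {"abstract", "mental", "novel"},
    "seed":  {"small", "grows", "plant"},
}

# A hypothetical visual-style pairing of concrete terms shares features...
print(shared_features("moon", "pearl", norms))   # {'round', 'bright', 'white'}
# ...while a hypothetical verbal-style pairing may share none.
print(shared_features("idea", "seed", norms))    # set()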
APA, Harvard, Vancouver, ISO, and other styles
41

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Full text
Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
APA, Harvard, Vancouver, ISO, and other styles
42

Bogaerts, Louisa, Noam Siegelman, Tali Ben-Porat, and Ram Frost. "Is the Hebb repetition task a reliable measure of individual differences in sequence learning?" Quarterly Journal of Experimental Psychology 71, no. 4 (January 1, 2018): 892–905. http://dx.doi.org/10.1080/17470218.2017.1307432.

Full text
Abstract:
The Hebb repetition task, an operationalization of long-term sequence learning through repetition, is the focus of renewed interest, as it is taken to provide a laboratory analogue for naturalistic vocabulary acquisition. Indeed, recent studies have consistently related performance in the Hebb repetition task with a range of linguistic (dis)abilities. However, despite the growing interest in the Hebb repetition effect as a theoretical construct, no previous research has ever tested whether the task used to assess Hebb learning offers a stable and reliable measure of individual performance in sequence learning. Since reliability is a necessary condition for predictive validity, in the present work we tested whether individual ability in visual verbal Hebb repetition learning displays basic test–retest reliability. In a first experiment, Hebrew–English bilinguals performed two verbal Hebb tasks, one with English and one with Hebrew consonant letters. They were retested on the same Hebb tasks after a period of about 6 months. Overall, serial recall performance proved to be a stable and reliable capacity of an individual. By contrast, the test–retest reliability of individual learning performance in our Hebb task was close to zero. A second experiment with French speakers replicated these results and demonstrated that the concurrent learning of two repeated Hebb sequences within the same task only minimally improves the reliability scores. Taken together, our results raise concerns regarding the usefulness of at least some current Hebb learning tasks in predicting linguistic (dis)abilities. The theoretical implications are discussed.
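Test-retest reliability of this kind is conventionally quantified as the correlation between the same individuals' scores across the two sessions. A minimal sketch in Python with synthetic data, contrasting a stable measure with an unstable one, as the abstract reports:

# Sketch of a test-retest reliability check: correlate each participant's
# score at time 1 with their score at time 2. A correlation near zero, as
# reported above for Hebb learning, means the task does not yield a stable
# individual-difference measure. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 40
serial_recall_t1 = rng.normal(0.7, 0.1, n)
serial_recall_t2 = serial_recall_t1 + rng.normal(0, 0.03, n)  # stable trait
hebb_learning_t1 = rng.normal(0.2, 0.1, n)
hebb_learning_t2 = rng.normal(0.2, 0.1, n)                    # no stable trait

def test_retest(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"serial recall r = {test_retest(serial_recall_t1, serial_recall_t2):.2f}")  # high
print(f"Hebb learning r = {test_retest(hebb_learning_t1, hebb_learning_t2):.2f}")  # near zero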
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Qi, Richard Lewis, Satinder Singh, and Edmund Durfee. "Learning to Communicate and Solve Visual Blocks-World Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5781–88. http://dx.doi.org/10.1609/aaai.v33i01.33015781.

Full text
Abstract:
We study emergent communication between speaker and listener recurrent neural-network agents that are tasked with cooperatively constructing a blocks-world target image sampled from a generative grammar of blocks configurations. The speaker receives the target image and learns to emit a sequence of discrete symbols from a fixed vocabulary. The listener learns to construct a blocks-world image by choosing block placement actions as a function of the speaker's full utterance and the image of the ongoing construction. Our contributions are (a) the introduction of a task domain for studying emergent communication that is both challenging and affords useful analyses of the emergent protocols; (b) an empirical comparison of the interpolation and extrapolation performance of training via supervised, (contextual) bandit, and reinforcement learning; and (c) evidence for the emergence of interesting linguistic properties in the RL agent protocol that are distinct from the other two.
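At a high level, the speaker-listener pairing can be sketched as two small recurrent networks: the speaker encodes the target image into a discrete symbol sequence, and the listener maps the utterance plus the current canvas to a block-placement action. A toy PyTorch skeleton under those assumptions; all sizes and module names are invented, and the paper's actual architecture and training procedure differ:

# Toy skeleton of a speaker-listener pair: the speaker turns the target image
# into a discrete symbol sequence; the listener consumes the utterance plus
# the current canvas and outputs block-placement action logits.
# Sizes and names are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

VOCAB, MSG_LEN, HID = 16, 8, 64

class Speaker(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(32 * 32, HID)       # stand-in image encoder
        self.rnn = nn.GRU(VOCAB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, image):                        # image: (B, 32, 32)
        h = self.encode(image.flatten(1)).unsqueeze(0)
        tok = torch.zeros(image.size(0), 1, VOCAB)   # start symbol
        msg = []
        for _ in range(MSG_LEN):
            o, h = self.rnn(tok, h)
            idx = self.out(o).argmax(-1)             # greedy; RL would sample
            tok = nn.functional.one_hot(idx, VOCAB).float()
            msg.append(idx)
        return torch.cat(msg, dim=1)                 # (B, MSG_LEN) symbol ids

class Listener(nn.Module):
    def __init__(self, n_actions=10):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True)
        self.canvas = nn.Linear(32 * 32, HID)
        self.policy = nn.Linear(2 * HID, n_actions)

    def forward(self, message, canvas):
        _, h = self.rnn(self.embed(message))
        c = self.canvas(canvas.flatten(1))
        return self.policy(torch.cat([h[-1], c], dim=-1))  # action logits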
APA, Harvard, Vancouver, ISO, and other styles
44

Kronmüller, Edmundo, Ira Noveck, Natalia Rivera, Francisco Jaume-Guazzini, and Dale Barr. "The positive side of a negative reference: the delay between linguistic processing and common ground." Royal Society Open Science 4, no. 2 (February 2017): 160827. http://dx.doi.org/10.1098/rsos.160827.

Full text
Abstract:
Interlocutors converge on names to refer to entities. For example, a speaker might refer to a novel-looking object as the jellyfish and, once identified, the listener will too. The hypothesized mechanism behind such referential precedents is a subject of debate. The common ground view claims that listeners register the object as well as the identity of the speaker who coined the label. The linguistic view claims that, once established, precedents are treated by listeners like any other linguistic unit, i.e., without needing to keep track of the speaker. To test predictions from each account, we used visual-world eyetracking, which allows observations in real time, during a standard referential communication task. Participants had to select objects based on instructions from two speakers. In the critical condition, listeners sought an object with a negative reference such as not the jellyfish. We aimed to determine the extent to which listeners rely on the linguistic input, common ground, or both. We found that initial interpretations were based on linguistic processing only and that common ground considerations do emerge but only after 1000 ms. Our findings support the idea that—at least temporally—linguistic processing can be isolated from common ground.
APA, Harvard, Vancouver, ISO, and other styles
45

Gijbels, Liesbeth, Jason D. Yeatman, Kaylah Lalonde, and Adrian KC Lee. "Children’s age matters, but not for audiovisual speech enhancement." Journal of the Acoustical Society of America 150, no. 4 (October 2021): A337. http://dx.doi.org/10.1121/10.0008500.

Full text
Abstract:
Articulation movements help us identify speech in noisy environments. While this has been observed at almost all ages, the size of the perceived benefit and its relationship to development in children are less well understood. Here, we focus on exploring audiovisual speech benefit in typically developing children (N = 160) across a wide age range (4–15 years) by measuring performance via an online audiovisual speech performance task that is low in cognitive and linguistic demands. Specifically, we investigated how audiovisual speech benefit develops with age and the impact of some potentially important intrinsic (e.g., gender, phonological skills) and extrinsic (e.g., choice of stimuli) experimental factors. Our results show increased performance in the individual modalities (audio-only, audiovisual, visual-only) as a function of age, but no difference in the size of audiovisual speech enhancement. Furthermore, older children showed a significant impact of visually distracting stimuli (e.g., mismatched video), whereas these had no additional impact on the performance of the youngest children. No phonological or gender differences were found, given the low cognitive and linguistic demands of this task.
APA, Harvard, Vancouver, ISO, and other styles
46

Kweldju, Siusana. "USING GEOSEMIOTIC APPROACH, LEARNERS CREATE FOR DEVELOPING MULTIMODAL COMPETENCIES: TASK-BASED." J-ELLiT (Journal of English Language, Literature, and Teaching) 3, no. 1 (June 30, 2019): 1. http://dx.doi.org/10.17977/um046v3i1p1-11.

Full text
Abstract:
In the digital age, the notion of text has broadened to include digitally constructed multimodal texts. Meaning-making in everyday life is based not only on verbal language but also on visual images. Students need more learning assignments and activities to develop their multimodal communication skills. To meet this need, a project utilizing the linguistic landscape as a learning context, chosen for its rich multimodal representations, was created. A task-based approach is adopted to facilitate a triple-track solution: improving students' general English and display English proficiency, raising students' multimodal literacy, and developing their collaborative skills. The task-based approach is employed because it strengthens learners' opportunities to do real-world-relevant projects that promote both their language acquisition and their collaborative skills. The project is completed when learners, in teams, give presentations as their learning outcomes.
APA, Harvard, Vancouver, ISO, and other styles
47

Panchakshari, Abhishek Budiguppe, Getcy Bebayal, and Nithyashree -. "Effect of Modality on Transfer of Linguistic Stimuli from Short-Term to Long Term Memory: Evidence on Immediate and Delayed Recall." World Journal of Education and Humanities 5, no. 1 (January 7, 2023): p1. http://dx.doi.org/10.22158/wjeh.v5n1p1.

Full text
Abstract:
Memory is an important cognitive domain in our daily lives. Short-term and long-term memory are its main variants. Information in short-term memory is transferred to long-term memory through attention, practice, and rehearsal. The current study investigates the effect of modality on the transfer of linguistic stimuli from short-term to long-term memory. Twenty neurotypical Tamil-speaking participants were recruited and divided into two groups by random sampling. An auditory task was administered to the first group, in which participants were presented with sentences and asked to remember the key/content word, while an auditory-plus-visual task was administered to the second group. Recall of key/content words was tested under immediate and delayed recall conditions. Under the immediate recall condition, there was no difference between the two groups, but under the delayed recall condition, the modality of stimulus presentation played a significant role: the group presented with auditory stimuli performed better than the group presented with auditory plus visual modalities.
APA, Harvard, Vancouver, ISO, and other styles
48

Kutlu, Ethan, Alexandra Fell, Keith Apfelbaum, and Bob McMurray. "Effects of multilingual and monolingual social networks on speech perception." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A236. http://dx.doi.org/10.1121/10.0016126.

Full text
Abstract:
Speech perception is gradient—listeners track continuous acoustic differences within a category (McMurray et al., 2022; Kapnoula & McMurray, 2022). Listeners use this gradiency to adjust subphonetic details (McMurray & Jongman, 2011), recover from ambiguity (McMurray et al., 2009), and aid learning and adaptation (McMurray & Farris-Trimble, 2012; Clayards et al., 2008). However, it is unclear whether gradiency is a developmental product of linguistic experience, particularly the variability of speech that is experienced. This ongoing project (current n = 31, planned n = 60) is testing school-aged children (6–11 years old) using the visual analogue scaling task (Kong & Edwards, 2011). Children hear tokens from a speech continuum (e.g., beach/peach) and make continuous ratings about how /b/- or /p/- like the sound is. This is related to social network information regarding children’s language and social background. Preliminary results suggest that linguistic diversity impacts speech perception gradiency. The implications of bilingual education and linguistic environment in development will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Jiaming, Weixin Luo, Wei Zhang, and Lin Ma. "Explore Inter-contrast between Videos via Composition for Weakly Supervised Temporal Sentence Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 267–75. http://dx.doi.org/10.1609/aaai.v36i1.19902.

Full text
Abstract:
Weakly supervised temporal sentence grounding aims to temporally localize the target segment corresponding to a given natural language query, where only video-query pairs, without temporal annotations, are provided during training. Most existing methods use the fused visual-linguistic feature to reconstruct the query, where the least reconstruction error determines the target segment. This work introduces a novel approach that explores the inter-contrast between videos in a composed video by selecting components from two different videos and fusing them into a single video. Such a straightforward yet effective composition strategy provides temporal annotations at multiple composed positions, resulting in numerous videos with temporal ground truths for training the temporal sentence grounding task. A transformer framework with multi-task training is introduced to learn a compact but efficient visual-linguistic space. The experimental results on the public Charades-STA and ActivityNet-Caption datasets demonstrate the effectiveness of the proposed method, with performance comparable to state-of-the-art weakly supervised baselines. The code is available at https://github.com/PPjmchen/Composition_WSTG.
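The composition idea can be illustrated directly: splicing a clip from one video into another at a known offset yields a composed video whose temporal ground truth for the inserted clip's query is known for free. A rough sketch in Python with PyTorch tensors; shapes and function names are assumptions, not the authors' code:

# Sketch of inter-video composition: a clip from video B is spliced into
# video A at a known offset, so the composed video carries an exact temporal
# annotation for B's query "for free". Shapes are illustrative frame features.
import torch

def compose(video_a, video_b, seg_start, seg_end, insert_at):
    """video_*: (T, D) frame-feature tensors; indices are frame offsets."""
    clip = video_b[seg_start:seg_end]                       # component from B
    composed = torch.cat([video_a[:insert_at], clip,
                          video_a[insert_at:]], dim=0)
    gt = (insert_at, insert_at + clip.size(0))              # known ground truth
    return composed, gt

video_a = torch.randn(100, 512)
video_b = torch.randn(80, 512)
composed, (t_start, t_end) = compose(video_a, video_b, 20, 50, insert_at=40)
print(composed.shape, (t_start, t_end))  # torch.Size([130, 512]) (40, 70)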
APA, Harvard, Vancouver, ISO, and other styles
50

Bruck, Maggie. "Component Spelling Skills of College Students with Childhood Diagnoses of Dyslexia." Learning Disability Quarterly 16, no. 3 (August 1993): 171–84. http://dx.doi.org/10.2307/1511325.

Full text
Abstract:
This article examines the component spelling skills of adults with childhood diagnoses of dyslexia in an effort to identify some of the basic impairments associated with their spelling problems and to determine whether these adults ever attain age- or level-appropriate competence. College students with childhood diagnoses of dyslexia were given a dictation task, a spelling-recognition task, and a nonword spelling task to assess their use and knowledge of sound-spelling, orthographic, morphologic, and visual information. Their performance on these tasks was compared to that of control groups of normal college students and of normal grade 6 students (matched with the dyslexics on the basis of their standardized spelling and reading test scores). Dyslexics' spelling problems were primarily associated with their failure to acquire knowledge of the mappings between the spellings and sounds of English. Their use and knowledge of morphological (higher-level linguistic) information and of visual information for spelling, however, are predictable from their reading and spelling levels. This last set of results reflects the fact that the dyslexics in this study engaged in much reading and that exposure to written words plays an important role in the development of these specific component spelling skills.
APA, Harvard, Vancouver, ISO, and other styles