Journal articles on the topic "Visual and auditory languages"

Follow this link to see other types of publications on the topic: Visual and auditory languages.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Browse the top 50 journal articles for research on the topic "Visual and auditory languages".

Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the work's abstract (summary) online, if it is included in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Burnham, Denis, Benjawan Kasisopa, Amanda Reid, Sudaporn Luksaneeyanawin, Francisco Lacerda, Virginia Attina, Nan Xu Rattanasone, Iris-Corinna Schwarz and Diane Webster. "Universality and language-specific experience in the perception of lexical tone and pitch". Applied Psycholinguistics 36, no. 6 (21 November 2014): 1459–91. http://dx.doi.org/10.1017/s0142716414000496.

Abstract:
Two experiments focus on Thai tone perception by native speakers of tone languages (Thai, Cantonese, and Mandarin), a pitch–accent (Swedish), and a nontonal (English) language. In Experiment 1, there was better auditory-only and auditory–visual discrimination by tone and pitch–accent language speakers than by nontone language speakers. Conversely and counterintuitively, there was better visual-only discrimination by nontone language speakers than tone and pitch–accent language speakers. Nevertheless, visual augmentation of auditory tone perception in noise was evident for all five language groups. In Experiment 2, involving discrimination in three fundamental frequency equivalent auditory contexts, tone and pitch–accent language participants showed equivalent discrimination for normal Thai speech, filtered speech, and violin sounds. In contrast, nontone language listeners had significantly better discrimination for violin sounds than filtered speech and in turn speech. Together the results show that tone perception is determined by both auditory and visual information, by acoustic and linguistic contexts, and by universal and experiential factors.
2

Vélez-Uribe, Idaly, and Mónica Rosselli. "The auditory and visual appraisal of emotion-related words in Spanish–English bilinguals". Bilingualism: Language and Cognition 22, no. 1 (5 October 2017): 30–46. http://dx.doi.org/10.1017/s1366728917000517.

Abstract:
Bilinguals experience emotions differently depending on which language they are speaking. Emotionally loaded words were expected to be appraised differently in the first versus the second language in Spanish–English bilinguals. Three categories of words (positive, negative, and taboo) were appraised in both languages in the visual and auditory sensory modalities. Positive word ratings were more positive in English than in Spanish. Negative words were judged as more negative in English than in Spanish. Taboo words were rated as more negative in Spanish than in English. Significant regression models were obtained for the visual and auditory positive words and auditory negative words with English and Spanish proficiency as the most significant predictors. Results support the view that there are differences in the appraisal of emotions in the two languages spoken by bilinguals; the direction of the difference depends on the emotion category of words, and it is influenced by language proficiency.
3

Lu, Youtao, and James L. Morgan. "Homophone auditory processing in cross-linguistic perspective". Proceedings of the Linguistic Society of America 5, no. 1 (23 March 2020): 529. http://dx.doi.org/10.3765/plsa.v5i1.4733.

Abstract:
Previous studies reported conflicting results for the effects of homophony on visual word processing across languages. On finding significant differences in homophone density in Japanese, Mandarin Chinese and English, we conducted two experiments to compare native speakers’ competence in homophone auditory processing across these three languages. A lexical decision task showed that the effect of homophony on word processing in Japanese was significantly less detrimental than in Mandarin and English. A word-learning task showed that native Japanese speakers were the fastest in learning novel homophones. These results suggest that language-intrinsic properties influence corresponding language processing abilities of native speakers.
4

Brookshire, Geoffrey, Jenny Lu, Howard C. Nusbaum, Susan Goldin-Meadow and Daniel Casasanto. "Visual cortex entrains to sign language". Proceedings of the National Academy of Sciences 114, no. 24 (30 May 2017): 6352–57. http://dx.doi.org/10.1073/pnas.1620350114.

Abstract:
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ∼1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
5

Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech". International Journal of Behavioral Development 37, no. 2 (25 February 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.

Abstract:
The present eye-tracking study aimed to investigate the impact of auditory speech information on 12-month-olds’ gaze behavior to silently-talking faces. We examined German infants’ face-scanning behavior to side-by-side presentation of a bilingual speaker’s face silently speaking German utterances on one side and French on the other side, before and after auditory familiarization with one of the two languages. The results showed that 12-month-old infants showed no general visual preference for either of the visual speeches, neither before nor after auditory input. But, infants who heard native speech decreased their looking time to the mouth area and focused longer on the eyes compared to their scanning behavior without auditory language input, whereas infants who heard non-native speech increased their visual attention on the mouth region and focused less on the eyes. Thus, it can be assumed that 12-month-olds quickly identified their native language based on auditory speech and guided their visual attention more to the eye region than infants who have listened to non-native speech.
6

de la Cruz-Pavía, Irene, Janet F. Werker, Eric Vatikiotis-Bateson and Judit Gervain. "Finding Phrases: The Interplay of Word Frequency, Phrasal Prosody and Co-speech Visual Information in Chunking Speech by Monolingual and Bilingual Adults". Language and Speech 63, no. 2 (19 April 2019): 264–91. http://dx.doi.org/10.1177/0023830919842353.

Abstract:
The audiovisual speech signal contains multimodal information to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants we investigated whether English monolinguals and bilingual speakers of English and a language with opposite basic word order (i.e., in which objects precede verbs) can use word frequency, phrasal prosody and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that monolinguals and bilinguals used the auditory and visual sources of information to chunk “phrases” from the input. These results suggest that speech segmentation is a bimodal process, though the influence of co-speech facial gestures is rather limited and linked to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, seems to determine the bilinguals’ segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.
7

Newman-Norlund, Roger D., Scott H. Frey, Laura-Ann Petitto and Scott T. Grafton. "Anatomical Substrates of Visual and Auditory Miniature Second-language Learning". Journal of Cognitive Neuroscience 18, no. 12 (December 2006): 1984–97. http://dx.doi.org/10.1162/jocn.2006.18.12.1984.

Abstract:
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the “critical period.” The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance.
8

Storms, Russell L., and Michael J. Zyda. "Interactions in Perceived Quality of Auditory-Visual Displays". Presence: Teleoperators and Virtual Environments 9, no. 6 (December 2000): 557–80. http://dx.doi.org/10.1162/105474600300040385.

Abstract:
The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
9

Hasenäcker, Jana, Luianta Verra and Sascha Schroeder. "Comparing length and frequency effects in children across modalities". Quarterly Journal of Experimental Psychology 72, no. 7 (20 October 2018): 1682–91. http://dx.doi.org/10.1177/1747021818805063.

Abstract:
Although it is well established that beginning readers rely heavily on phonological decoding, the overlap of the phonological pathways used in visual and auditory word recognition is not clear. Especially in transparent languages, phonological reading could use the same pathways as spoken word processing. In the present study, we report a direct comparison of lexical decision performance in the visual and auditory modality in beginning readers of a transparent language. Using lexical decision, we examine how marker effects of length and frequency differ in the two modalities and how these differences are modulated by reading ability. The results show that both frequency and length effects are stronger in the visual modality, and the differences in length effects between modalities are more pronounced for poorer readers than for better readers. This suggests that visual word recognition in beginning readers of a transparent language initially is based on phonological decoding and subsequent matching in the phonological lexicon, especially for poor readers. However, some orthographic processing seems to be involved already. We claim that the relative contribution of the phonological and orthographic route in beginning readers can be measured by the differences in marker effects between auditory and visual lexical decision.
10

Lallier, Marie, Nicola Molinaro, Mikel Lizarazu, Mathieu Bourguignon and Manuel Carreiras. "Amodal Atypical Neural Oscillatory Activity in Dyslexia". Clinical Psychological Science 5, no. 2 (21 December 2016): 379–401. http://dx.doi.org/10.1177/2167702616670119.

Abstract:
It has been proposed that atypical neural oscillations in both the auditory and the visual modalities could explain why some individuals fail to learn to read and suffer from developmental dyslexia. However, the role of specific oscillatory mechanisms in reading acquisition is still under debate. In this article, we take a cross-linguistic approach and argue that both the phonological and orthographic specifics of a language (e.g., linguistic rhythm, orthographic depth) shape the oscillatory activity thought to contribute to reading development. The proposed theoretical framework should allow future research to test cross-linguistic hypotheses that will shed light on the heterogeneity of auditory and visual disorders and their underlying brain dysfunction(s) in developmental dyslexia, and inform clinical practice by helping us to diagnose dyslexia across languages.
11

Wu, Yujia, Jingwen Ma, Lei Cai, Zengjian Wang, Miao Fan, Jianping Chu, Yue Zhang and Xiuhong Li. "Brain Activity during Visual and Auditory Word Rhyming Tasks in Cantonese–Mandarin–English Trilinguals". Brain Sciences 10, no. 12 (4 December 2020): 936. http://dx.doi.org/10.3390/brainsci10120936.

Abstract:
It is unclear whether the brain activity during phonological processing of second languages (L2) is similar to that of the first language (L1) in trilingual individuals, especially when the L1 is logographic, and the L2s are logographic and alphabetic, respectively. To explore this issue, this study examined brain activity during visual and auditory word rhyming tasks in Cantonese–Mandarin–English trilinguals. Thirty Chinese college students whose L1 was Cantonese and L2s were Mandarin and English were recruited. Functional magnetic resonance imaging (fMRI) was conducted while subjects performed visual and auditory word rhyming tasks in three languages (Cantonese, Mandarin, and English). The results revealed that in Cantonese–Mandarin–English trilinguals, whose L1 is logographic and the orthography of their L2 is the same as L1—i.e., Mandarin and Cantonese, which share the same set of Chinese characters—the brain regions for the phonological processing of L2 are different from those of L1; when the orthography of L2 is quite different from L1, i.e., English and Cantonese who belong to different writing systems, the brain regions for the phonological processing of L2 are similar to those of L1. A significant interaction effect was observed between language and modality in bilateral lingual gyri. Regions of interest (ROI) analysis at lingual gyri revealed greater activation of this region when using English than Cantonese and Mandarin in visual tasks.
12

Wang, Kai, and Nan Li. "ANALYSIS OF HONG KONG ZOMBIE MOVIES AUDIOVISUAL LANGUAGE IN THE 1980S". International Journal of Law, Government and Communication 7, no. 29 (1 September 2022): 18–26. http://dx.doi.org/10.35631/ijlgc.729002.

Abstract:
As a subcultural type of genre film, Hong Kong zombie films play an important role in Hong Kong cinema. Through visual language such as color, light, and the lens, and auditory language such as speech, music, and sound, Hong Kong zombie films create a horror atmosphere and play on the audience's emotions. The use of audiovisual language also implies the ideological representation of the collision between China and the West in Hong Kong in the 1980s.
13

Bidelman, Gavin M., and Shelley T. Heath. "Enhanced temporal binding of audiovisual information in the bilingual brain". Bilingualism: Language and Cognition 22, no. 04 (5 July 2018): 752–62. http://dx.doi.org/10.1017/s1366728918000408.

Abstract:
We asked whether bilinguals’ benefits reach beyond the auditory modality to benefit multisensory processing. We measured audiovisual integration of auditory and visual cues in monolinguals and bilinguals via the double-flash illusion where the presentation of multiple auditory stimuli concurrent with a single visual flash induces an illusory perception of multiple flashes. We varied stimulus onset asynchrony (SOA) between auditory and visual cues to measure the “temporal binding window” where listeners fuse a single percept. Bilinguals showed faster responses and were less susceptible to the double-flash illusion than monolinguals. Moreover, monolinguals showed poorer sensitivity in AV processing compared to bilinguals. The width of bilinguals’ AV temporal integration window was narrower than monolinguals’ for both leading and lagging SOAs (Biling.: -65–112 ms; Mono.: -193 – 112 ms). Our results suggest the plasticity afforded by speaking multiple languages enhances multisensory integration and audiovisual binding in the bilingual brain.
14

Erdener, Doğu, and Denis Burnham. "Auditory–visual speech perception in three- and four-year-olds and its relationship to perceptual attunement and receptive vocabulary". Journal of Child Language 45, no. 2 (6 June 2017): 273–89. http://dx.doi.org/10.1017/s0305000917000174.

Abstract:
Despite the body of research on auditory–visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception – lip-reading and visual influence in auditory–visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory–visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory–visual speech perception.
15

Kubovy, Michael, and David Van Valkenburg. "Auditory and visual objects". Cognition 80, no. 1-2 (June 2001): 97–126. http://dx.doi.org/10.1016/s0010-0277(00)00155-4.
16

Fine, Ione, Eva M. Finney, Geoffrey M. Boynton and Karen R. Dobkins. "Comparing the Effects of Auditory Deprivation and Sign Language within the Auditory and Visual Cortex". Journal of Cognitive Neuroscience 17, no. 10 (October 2005): 1621–37. http://dx.doi.org/10.1162/089892905774597173.

Abstract:
To investigate neural plasticity resulting from early auditory deprivation and use of American Sign Language, we measured responses to visual stimuli in deaf signers, hearing signers, and hearing nonsigners using functional magnetic resonance imaging. We examined “compensatory hypertrophy” (changes in the responsivity/size of visual cortical areas) and “cross-modal plasticity” (changes in auditory cortex responses to visual stimuli). We measured the volume of early visual areas (V1, V2, V3, V4, and MT+). We also measured the amplitude of responses within these areas, and within the auditory cortex, to a peripheral visual motion stimulus that was attended or ignored. We found no major differences between deaf and hearing subjects in the size or responsivity of early visual areas. In contrast, within the auditory cortex, motion stimuli evoked significant responses in deaf subjects, but not in hearing subjects, in a region of the right auditory cortex corresponding to Brodmann's areas 41, 42, and 22. This hemispheric selectivity may be due to a predisposition for the right auditory cortex to process motion; earlier studies report a right hemisphere bias for auditory motion in hearing subjects. Visual responses within the auditory cortex of deaf subjects were stronger for attended than ignored stimuli, suggesting top-down processes. Hearing signers did not show visual responses in the auditory cortex, indicating that cross-modal plasticity can be attributed to auditory deprivation rather than sign language experience. The largest effects of auditory deprivation occurred within the auditory cortex rather than the visual cortex, suggesting that the absence of normal input is necessary for large-scale cortical reorganization to occur.
17

Miranda, Luma da Silva, Carolina Gomes da Silva, João Antônio de Moraes and Albert Rilliard. "Visual and auditory cues of assertions and questions in Brazilian Portuguese and Mexican Spanish". Journal of Speech Sciences 9 (9 September 2020): 73–92. http://dx.doi.org/10.20396/joss.v9i00.14958.

Abstract:
The aim of this paper is to compare the multimodal production of questions in two different language varieties: Brazilian Portuguese and Mexican Spanish. Descriptions of the auditory and visual cues of two speech acts, assertions and questions, are presented based on Brazilian and Mexican corpora. The sentence “Como você sabe” was produced as a yes-no (echo) question and an assertion by ten speakers (five male) from Rio de Janeiro and the sentence “Apaga la tele” was produced as a yes-no question and an assertion by five speakers (three male) from Mexico City. The results show that, whereas the Brazilian Portuguese and Mexican Spanish assertions are produced with different F0 contours and different facial expressions, questions in both languages are produced with specific F0 contours but similar facial expressions. The outcome of this comparative study suggests that lowering the eyebrows, tightening the lid and wrinkling the nose can be considered question markers in both language varieties.
18

Serafini, Sandra, Merlise Clyde, Matt Tolson and Michael M. Haglund. "Multimodality Word-Finding Distinctions in Cortical Stimulation Mapping". Neurosurgery 73, no. 1 (23 April 2013): 36–47. http://dx.doi.org/10.1227/01.neu.0000429861.42394.d8.

Abstract:
BACKGROUND: Cortical stimulation mapping (CSM) commonly uses visual naming to determine resection margins in the dominant hemisphere of patients with epilepsy. Visual naming alone may not identify all language sites in resection-prone areas, prompting additional tasks for comprehensive language mapping. OBJECTIVE: To demonstrate word-finding distinctions between visual, auditory, and reading modalities during CSM and the percentage of modality-specific language sites within dominant hemisphere subregions. METHODS: Twenty-eight patients with epilepsy underwent CSM by the use of visual, auditory, and sentence-completion tasks. Hierarchical logistic regression analyzed errors to identify language sites and provide modality-specific percentages within subregions. RESULTS: The percentage of sites classified as language sites based on auditory naming was twice as high in anterior temporal regions compared with visual naming, marginally higher in posterior temporal areas, and comparable in parietal regions. Sentence completion was comparable to visual and auditory naming in parietal regions and lower in most temporal areas. Of 470 sites tested with both visual and auditory naming, 95 sites were distinctly auditory, whereas 48 sites were distinctly visual. The remaining sites overlapped. CONCLUSION: Distinct cortical areas were found for distinct input modalities, with language sites in anterior tip regions found most often by using auditory naming. The vulnerability of anterior temporal tip regions to resection in this population and distinct sites for each modality suggest that a multimodality approach may be needed to spare crucial language sites, if sparing those sites can be shown to significantly reduce the rate of postoperative language deficits without sacrificing seizure control.
19

Strenge, Hans, and Jessica Böhm. "Effects of Regular Switching between Languages during Random Number Generation". Perceptual and Motor Skills 100, no. 2 (April 2005): 524–34. http://dx.doi.org/10.2466/pms.100.2.524-534.

Abstract:
Random number generation is a task that engages working memory and executive processes within the domain of number representation. In the present study we address the role of language in number processing by switching languages during random number generation (numbers 1–9), using German (L1) and English (L2), and alternating L1/L2. Results indicate large correspondence between performance in L1 and L2. In contrast to nonswitching performance, randomization with alternating languages showed a significant increase of omitted responses, whereas the random sequences were less stereotyped, showing significantly less repetition avoidance and cycling behavior. During an intentional switch between languages, errors in language sequence appeared in 23% of responses on the average, independently of the quality of randomization but associated with a clear persistence of L2. These results indicate that random number generation is more closely linked to auditory-phonological representation of numerals than to visual arabic notation.
20

Burnham, Denis, Kaoru Sekiyama and Dogu Erdener. "Cross‐language auditory‐visual speech perception development". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3879. http://dx.doi.org/10.1121/1.2935787.
21

Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech". Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.

Abstract:
Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
22

Han, Yueqiao, Martijn Goudbeek, Maria Mos and Marc Swerts. "Relative Contribution of Auditory and Visual Information to Mandarin Chinese Tone Identification by Native and Tone-naïve Listeners". Language and Speech 63, no. 4 (30 December 2019): 856–76. http://dx.doi.org/10.1177/0023830919889995.

Abstract:
Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), which tends to be visually salient (lip-reading based), while the present study aims to extend the existing body of literature to the suprasegmental level, that is, investigating a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception, and that the different tones correlate with variable movements of the head and neck. We constructed various tone combinations of congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each) and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages. In line with our previous work, we found that tone identification varies with individual tones, with tone 3 (the low-dipping tone) being the easiest one to identify, whereas tone 4 (the high-falling tone) was the most difficult one. We found that both groups of participants mainly relied on auditory input (instead of visual input), and that the auditory reliance for Chinese subjects was even stronger. The results did not show evidence for auditory-visual integration among native participants, while visual information is helpful for tone-naïve participants. However, even for this group, visual information only marginally increases the accuracy in the tone identification task, and this increase depends on the tone in question.
23

Chesters, Jennifer, and Riikka Möttönen. "Using audiovisual feedback during speaking". Seeing and Perceiving 25 (2012): 49. http://dx.doi.org/10.1163/187847612x646703.

Abstract:
The sensory systems have an important role in speech production. Monitoring sensory consequences of articulatory movements supports fluent speaking. It is well known that delayed auditory feedback disrupts fluency of speech. Also, there is some evidence that immediate visual feedback, i.e., seeing one’s own articulatory movements in a mirror, decreases the disruptive effect of delayed auditory feedback (Jones and Striemer, 2007). It is unknown whether delayed visual feedback affects fluency of speech. Here, we aimed to investigate the effects of delayed auditory, visual and audiovisual feedback on speech fluency. 20 native English speakers (with no history of speech and language problems) participated in the experiment. Participants received delayed (200 ms) or immediate auditory feedback, whilst repeating sentences. Moreover, they received either no visual feedback, immediate visual feedback or delayed visual feedback (200, 400, 600 ms). Under delayed auditory feedback, the duration of sentences was longer and number of speech errors was greater than under immediate auditory feedback, confirming that delayed auditory feedback disrupts speech. Immediate visual feedback had no effect on speech fluency. Importantly, fluency of speech was most disrupted when both auditory and visual feedback was delayed, suggesting that delayed visual feedback strengthened the disruptive effect of delayed auditory feedback. However, delayed visual feedback combined with immediate auditory feedback had no effect on speech fluency. Our findings demonstrate that although visual feedback is not available during speaking in every-day life, it can be integrated with auditory feedback and influence fluency of speech.
24

Bhatti, Muhammad Safdar, and Rafia Mukhtar. "Impact of Vocabulary Learning Strategies on Gender Based ESL Learners in Pakistan". REiLA: Journal of Research and Innovation in Language 2, no. 3 (27 December 2020): 135–41. http://dx.doi.org/10.31849/reila.v2i3.4603.

Abstract:
The wide spectrum of English language compels readers to find out the exact crux of the language itself. English has won the status of international language. It has become a dire need of this age. The English language is comparatively difficult due to its pronunciation, sentence structure and vocabulary level from local languages in Pakistan. Vocabulary is the utmost aspect of learning a second language. It is the essence and soul of language. The language process depends on learning vocabulary. So the current paper investigates the impact of vocabulary learning strategies for the ESL learners in Pakistan. It was an experimental type of research. One hundred students of Grade-9 in the academic year 2019-20 participated in this study. The data was collected through test and questionnaire. The study results revealed that the students who were taught by the ESL learning techniques (semantic mapping, imaging and pics, visual and auditory, group association and word contact) performed better as compared with the students in the traditional vocabulary learning method. Female students performed better in the experimental group. ESL male learners used group association learning technique at priority, and ESL female learners used visual and auditory learning at their priority. The researchers recommend that English language teachers should use vocabulary learning strategies for teaching English to ESL learners.
25

Pons, Ferran, Llorenç Andreu, Monica Sanz-Torrent, Lucía Buil-Legaz and David J. Lewkowicz. "Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment". Journal of Child Language 40, no. 3 (9 July 2012): 687–700. http://dx.doi.org/10.1017/s0305000912000189.

Abstract:
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
26

O’Meara, Carolyn, and Asifa Majid. "Anger stinks in Seri: Olfactory metaphor in a lesser-described language". Cognitive Linguistics 31, no. 3 (27 August 2020): 367–91. http://dx.doi.org/10.1515/cog-2017-0100.

Abstract:
Previous studies claim there are few olfactory metaphors cross-linguistically, especially compared to metaphors originating in the visual and auditory domains. We show olfaction can be a source for metaphor and metonymy in a lesser-described language that has rich lexical resources for talking about odors. In Seri, an isolate language of Mexico spoken by indigenous hunter-gatherers, we find a novel metaphor for emotion never previously described – “anger stinks”. In addition, distinct odor verbs are used metaphorically to distinguish volitional vs. non-volitional states-of-affairs. Finally, there is ample olfactory metonymy in Seri, especially prevalent in names for plants, but also found in names for insects and artifacts. This calls for a re-examination of better-known languages for the overlooked role olfaction may play in metaphor and metonymy. The Seri language illustrates how valuable data from understudied languages can be in highlighting novel ways by which people conceptualize themselves and their world.
27

Muh, Carrie R., Naomi D. Chou, Shervin Rahimpour, Jordan M. Komisarow, Tracy G. Spears, Herbert E. Fuchs, Sandra Serafini and Gerald A. Grant. "Cortical stimulation mapping for localization of visual and auditory language in pediatric epilepsy patients". Journal of Neurosurgery: Pediatrics 25, no. 2 (February 2020): 168–77. http://dx.doi.org/10.3171/2019.8.peds1922.

Abstract:
OBJECTIVE: To determine resection margins near eloquent tissue, electrical cortical stimulation (ECS) mapping is often used with visual naming tasks. In recent years, auditory naming tasks have been found to provide a more comprehensive map. Differences in modality-specific language sites have been found in adult patients, but there is a paucity of research on ECS language studies in pediatric patients. The goals of this study were to evaluate word-finding distinctions between visual and auditory modalities and identify which cortical subregions most often contain critical language function in a pediatric population. METHODS: Twenty-one pediatric patients with epilepsy or temporal lobe pathology underwent ECS mapping using visual (n = 21) and auditory (n = 14) tasks. Fisher’s exact test was used to determine whether the frequency of errors in the stimulated trials was greater than the patient’s baseline error rate for each tested modality and subregion. RESULTS: While the medial superior temporal gyrus was a common language site for both visual and auditory language (43.8% and 46.2% of patients, respectively), other subregions showed significant differences between modalities, and there was significant variability between patients. Visual language was more likely to be located in the anterior temporal lobe than was auditory language. The pediatric patients exhibited fewer parietal language sites and a larger range of sites overall than did adult patients in previously published studies. CONCLUSIONS: There was no single area critical for language in more than 50% of patients tested in either modality for which more than 1 patient was tested (n > 1), affirming that language function is plastic in the setting of dominant-hemisphere pathology. The high rates of language function throughout the left frontal, temporal, and anterior parietal regions with few areas of overlap between modalities suggest that ECS mapping with both visual and auditory testing is necessary to obtain a comprehensive language map prior to epileptic focus or tumor resection.
28

Gregersen, Tammy. "Recognizing Visual and Auditory Cues in the Detection of Foreign-Language Anxiety". TESL Canada Journal 26, no. 2 (3 June 2009): 46. http://dx.doi.org/10.18806/tesl.v26i2.414.

Abstract:
This study examines whether nonverbal visual and/or auditory channels are more effective in detecting foreign-language anxiety. Recent research suggests that language teachers are often able to successfully decode the nonverbal behaviors indicative of foreign-language anxiety; however, relatively little is known about whether visual and/or auditory channels are more effective. To this end, a group of 36 preservice English-language teachers were asked to view videotaped oral presentations of seven beginning English-language learners under three conditions: visual only, audio only, and a combination of visual and audio in order to judge their foreign-language anxiety status. The evidence gathered through this study did not conclusively determine the channel through which foreign-language anxiety could be most accurately decoded, but it did suggest indicators in the auditory and visual modes that could lead to more successful determination of behaviors indicative of negative affect.
29

Siallagan, Sari Rishita, Sulastri Manurung and Juwita Boneka Sinaga. "Analysis of Figurative Language and Imagery in Taylor Swift's Songs". ANGLO-SAXON: Jurnal Ilmiah Program Studi Pendidikan Bahasa Inggris 8, no. 1 (10 October 2017): 55. http://dx.doi.org/10.33373/anglo.v8i1.984.

Abstract:
The aim of this research is to find out the kinds of figurative language and imagery in the song lyrics of Taylor Swift’s “1989” Album. Furthermore, in this research the researcher used qualitative descriptive method. The result of the study is presented in the form of paragraphs. The researcher analyzed the songs by reading them intensively and giving attention for each line. After that, the researcher examined the figurative language and imagery of the songs lyrics. After investigating the sentence in the songs lyrics, the researcher found eight kinds of figurative languages that are used in the songs lyrics, they are personification, metaphor, hyperbole, simile, oxymoron, allusion, litotes and metonymy. Six kinds of imagery also used in the songs lyrics, they are visual imagery, auditory imagery, organic imagery, kinesthetic imagery, tactile imagery and olfactory imagery. The most dominant of figurative language used is personification and the dominant imagery used is visual imagery. Keywords: figurative language, imagery, lyrics
30

Wong, Wai Leung, and Urs Maurer. "The effects of input and output modalities on language switching between Chinese and English". Bilingualism: Language and Cognition 24, no. 4 (17 March 2021): 719–29. http://dx.doi.org/10.1017/s136672892100002x.

Abstract:
Language control is important for bilinguals to produce words in the right language. While most previous studies investigated language control using visual stimuli with vocal responses, language control regarding auditory stimuli and manual responses was rarely examined. In the present study, an alternating language switching paradigm was used to investigate language control mechanism under two input modalities (visual and auditory) and two output modalities (manual and vocal) by measuring switch costs in both error percentage and reaction time (RT) in forty-eight Cantonese–English early bilinguals. Results showed that higher switch costs in RT were found with auditory stimuli than visual stimuli, possibly due to shorter preparation time with auditory stimuli. In addition, switch costs in RT and error percentage could be obtained not only in speaking, but also in handwriting. Therefore, language control mechanisms, such as inhibition of the non-target language, may be shared between speaking and handwriting.
31

Hamberger, Marla J., and William T. Seidel. "Localization of cortical dysfunction based on auditory and visual naming performance". Journal of the International Neuropsychological Society 15, no. 4 (July 2009): 529–35. http://dx.doi.org/10.1017/s1355617709090754.

Abstract:
Naming is generally considered a left-hemisphere function without precise localization. However, recent cortical stimulation studies demonstrate a modality-related anatomical dissociation, in that anterior temporal stimulation disrupts auditory description naming (“auditory naming”) but not visual object naming (“visual naming”), whereas posterior temporal stimulation disrupts naming on both tasks. We hypothesized that patients with anterior temporal abnormalities would exhibit impaired auditory naming, yet normal range visual naming, whereas patients with posterior temporal abnormalities would exhibit impaired performance on both tasks. Thirty-four patients with documented anterior temporal abnormalities and 14 patients with documented posterior temporal abnormalities received both naming tests. As hypothesized, patients with anterior temporal abnormalities demonstrated impaired auditory naming, yet normal range visual naming performance. Patients with posterior temporal abnormalities were impaired in visual naming; however, auditory naming scores were intact. Although these group patterns were statistically significant, on an individual basis, auditory–visual naming asymmetries better predicted whether individual patients had anterior or posterior temporal abnormalities. These behavioral findings are generally consistent with stimulation results, suggesting that modality specificity is inherent in the organization of language, with predictable neuroanatomical correlates. Results also carry clinical implications regarding localizing dysfunction, identifying and characterizing naming deficits, and potentially, in treating neurologically based language disorders. (JINS, 2009, 15, 529–535.)
32

Beadle, Julie, Jeesun Kim and Chris Davis. "Effects of Age and Uncertainty on the Visual Speech Benefit in Noise". Journal of Speech, Language, and Hearing Research 64, no. 12 (13 December 2021): 5041–60. http://dx.doi.org/10.1044/2021_jslhr-20-00495.

Abstract:
Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) in comparison to an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain and whether any reduction is affected by listener age (older vs. younger) and how severe the auditory signal is masked. Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR). Results: When the SNR was −6 dB, for both age groups, the standard-sized visual speech benefit reduced as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not. Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments. Supplemental Material https://doi.org/10.23641/asha.16879549
33

Gafni, Chen, Maya Yablonski and Michal Ben-Shachar. "Morphological sensitivity generalizes across modalities". Mental Lexicon 14, no. 1 (11 November 2019): 37–67. http://dx.doi.org/10.1075/ml.18020.gaf.

Abstract:
A growing body of psycholinguistic research suggests that visual and auditory word recognition involve morphological decomposition: Individual morphemes are extracted and lexically accessed when participants are presented with multi-morphemic stimuli. This view is supported by the Morpheme Interference Effect (MIE), where responses to pseudowords that contain real morphemes are slower and less accurate than responses to pseudowords that contain invented morphemes. The MIE was previously demonstrated primarily for visually presented stimuli. Here, we examine whether individuals’ sensitivity to morphological structure generalizes across modalities. Participants performed a lexical decision task on visually and auditorily presented Hebrew stimuli, including pseudowords derived from real or invented roots. The results show robust MIEs in both modalities. We further show that visual MIE is consistently stronger than auditory MIE, both at the group level and at the individual level. Finally, the data show a significant correlation between visual and auditory MIEs at the individual level. These findings suggest that the MIE reflects a general sensitivity to morphological structure, which varies considerably across individuals, but is largely consistent across modalities within individuals. Thus, we propose that the MIE captures an important aspect of language processing, rather than a property specific to visual word recognition.
34

Zeelenberg, René, and Bruno R. Bocanegra. "Auditory emotional cues enhance visual perception". Cognition 115, no. 1 (April 2010): 202–6. http://dx.doi.org/10.1016/j.cognition.2009.12.004.
35

Hardison, Debra M. "Visual and auditory input in second-language speech processing". Language Teaching 43, no. 1 (10 December 2009): 84–95. http://dx.doi.org/10.1017/s0261444809990176.

Abstract:
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on auditory-visual (AV) input has been conducted more extensively in the fields of infant speech development (e.g., Meltzoff & Kuhl 1994), adult monolingual processing (e.g., McGurk & MacDonald 1976; see reference in this timeline), and the treatment of the hearing impaired (e.g., Owens & Blazek 1985) than in L2 speech processing (Hardison 2007). In these fields, the earliest visual input was a human face on which lip movements contributed linguistic information. Subsequent research expanded the types of visual sources to include computer-animated faces or talking heads (e.g., Massaro 1998), hand-arm gestures (Gullberg 2006), and various types of electronic visual displays such as those for pitch (Chun, Hardison & Pennington 2008). Recently, neurophysiological research has shed light on the neural processing of language input, providing another direction researchers have begun to explore in L2 processing (Perani & Abutalebi 2005).
36

Bellis, Teri James, and Jody Ross. "Performance of Normal Adults and Children on Central Auditory Diagnostic Tests and Their Corresponding Visual Analogs". Journal of the American Academy of Audiology 22, no. 08 (September 2011): 491–500. http://dx.doi.org/10.3766/jaaa.22.8.2.

Abstract:
Background: It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. Purpose: The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. Research Design: An experimental repeated measures design was employed. Study Sample: Participants consisted of two groups (adults, n = 10; children, n = 10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Data Collection and Analysis: Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Results: Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality × laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality × response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Conclusions: Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. 
Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD.
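The cross-modal analysis referred to above is, in essence, a Pearson product-moment correlation between each participant's score on an auditory test and the score on its visual analog. The minimal Python sketch below illustrates that calculation; the scores are invented placeholders, not data from the study.

    # Illustrative only: hypothetical percent-correct scores for ten participants
    # on an auditory test and its visual analog (not data from Bellis & Ross, 2011).
    from scipy.stats import pearsonr

    auditory_scores = [92, 88, 95, 90, 85, 97, 93, 89, 91, 94]
    visual_scores = [78, 82, 75, 80, 77, 84, 79, 81, 76, 83]

    r, p = pearsonr(auditory_scores, visual_scores)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")
    # A weak, non-significant correlation would be consistent with the paper's
    # conclusion that the auditory and visual versions tap independent mechanisms.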
37

Fernandez, Mercedes, Juliana Acosta, Kevin Douglass, Nikita Doshi and Jaime L. Tartar. "Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition". AIMS Neuroscience 1, no. 2 (2014): 145–57. http://dx.doi.org/10.3934/neuroscience.2014.2.145.

38

Benazzo, Sandra, and Aliyah Morgenstern. "A bilingual child’s multimodal path into negation". Gesture 14, no. 2 (31 December 2014): 171–202. http://dx.doi.org/10.1075/gest.14.2.03ben.

Abstract:
The study of the expression of negation in longitudinal adult-child data is a privileged locus for a multimodal approach to language acquisition. In bilingual language acquisition, the need to enter two languages at once may influence how the visual-gestural and auditory modalities are managed. To address these issues, we analyze longitudinal data from Antoine, a bilingual French/Italian child recorded separately, once a month for an hour, with his Italian mother and with his French father between the ages of 1;5 and 3;5. Analyses of all his multimodal utterances containing negation show that Antoine created efficient transitional systems along his developmental path, both by combining modalities and by mixing his two native languages. The visual-gestural modality is a stable resource he relies on in all the linguistic environments he experiences, and his bilingual environment may be connected to his mixed verbal productions, which he addressed to both French-speaking and Italian-speaking interlocutors. These two transitory, creative systems are effective elements of his communicative repertoire during an important period of his language development. Gesture may therefore serve a compensatory function for this child, offering a resource for communicating efficiently in his specific environment during his multimodal, multilingual entry into language.
39

Liu, Li, Xiaoxiang Deng, Danling Peng, Fan Cao, Guosheng Ding, Zhen Jin, Yawei Zeng et al. "Modality- and Task-specific Brain Regions Involved in Chinese Lexical Processing". Journal of Cognitive Neuroscience 21, no. 8 (August 2009): 1473–87. http://dx.doi.org/10.1162/jocn.2009.21141.

Abstract:
fMRI was used to examine lexical processing in native adult Chinese speakers. A 2 task (semantics and phonology) × 2 modality (visual and auditory) within-subject design was adopted. The semantic task involved a meaning association judgment and the phonological task involved a rhyming judgment on two sequentially presented words. The overall effect across tasks and modalities was used to identify seven ROIs, including the left fusiform gyrus (FG), the left superior temporal gyrus (STG), the left ventral inferior frontal gyrus (VIFG), the left middle temporal gyrus (MTG), the left dorsal inferior frontal gyrus (DIFG), the left inferior parietal lobule (IPL), and the left middle frontal gyrus (MFG). ROI analyses revealed two modality-specific areas, FG for visual and STG for auditory, and three task-specific areas, IPL and DIFG for phonology and VIFG for semantics. Greater DIFG activation was associated with conflicting tonal information between words for the auditory rhyming task, suggesting this region's role in strategic phonological processing, and greater VIFG activation was correlated with lower association between words for both the auditory and the visual meaning task, suggesting this region's role in retrieval and selection of semantic representations. The modality- and task-specific effects in Chinese revealed by this study are similar to those found in alphabetic languages. Unlike in English, however, MFG was both modality- and task-specific, suggesting that MFG may be responsible for the visuospatial analysis of Chinese characters and orthography-to-phonology integration at a syllabic level.
40

Gabay, Yafit, Shai Gabay, Rachel Schiff and Avishai Henik. "Visual and Auditory Interference Control of Attention in Developmental Dyslexia". Journal of the International Neuropsychological Society 26, no. 4 (15 November 2019): 407–17. http://dx.doi.org/10.1017/s135561771900122x.

Abstract:
An accumulating body of evidence highlights the contribution of general cognitive processes, such as attention, to language-related skills. Objective: The purpose of the present study was to explore how interference control (a subcomponent of selective attention) is affected in developmental dyslexia (DD) by means of control over simple stimulus-response mappings. Furthermore, we aimed to examine interference control in adults with DD across sensory modalities. Methods: The performance of 14 dyslexic adults and 14 matched controls was compared on visual/auditory Simon tasks, in which conflict was presented in terms of an incongruent mapping between the location of a visual/auditory stimulus and the appropriate motor response. Results: In the auditory task, dyslexic participants exhibited larger Simon effect costs; namely, they showed disproportionately larger reaction time (RT)/error costs when the auditory stimulus and response were incongruent, relative to the RT/error costs of non-impaired readers. In the visual Simon task, both groups presented Simon effect costs to the same extent. Conclusion: These results indicate that control of auditory selective attention is carried out less effectively in those with DD than visually controlled processing. The implications of this impaired process for the language-related skills of individuals with DD are discussed.
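For readers unfamiliar with the measure, the Simon effect cost described above is simply the difference between mean performance (here, reaction time) on incongruent and congruent trials, computed separately for each modality. The short Python sketch below illustrates the arithmetic with invented reaction times, not data from the study.

    # Illustrative only: hypothetical reaction times (ms) by modality and congruency.
    from statistics import mean

    rts = {
        "auditory": {"congruent": [512, 498, 530, 505], "incongruent": [590, 602, 575, 610]},
        "visual": {"congruent": [430, 442, 428, 450], "incongruent": [465, 470, 458, 472]},
    }

    for modality, trials in rts.items():
        cost = mean(trials["incongruent"]) - mean(trials["congruent"])
        print(f"{modality} Simon effect cost: {cost:.1f} ms")
    # A disproportionately larger auditory cost in one group, with comparable visual
    # costs, is the pattern the study reports for adults with dyslexia.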
41

Anderson, Carly A., Ian M. Wiggins, Pádraig T. Kitterick and Douglas E. H. Hartley. "Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults". Proceedings of the National Academy of Sciences 114, no. 38 (14 August 2017): 10256–61. http://dx.doi.org/10.1073/pnas.1704785114.

Abstract:
It has been suggested that visual language is maladaptive for hearing restoration with a cochlear implant (CI) due to cross-modal recruitment of auditory brain regions. Rehabilitative guidelines therefore discourage the use of visual language. However, neuroscientific understanding of cross-modal plasticity following cochlear implantation has been restricted due to incompatibility between established neuroimaging techniques and the surgically implanted electronic and magnetic components of the CI. As a solution to this problem, here we used functional near-infrared spectroscopy (fNIRS), a noninvasive optical neuroimaging method that is fully compatible with a CI and safe for repeated testing. The aim of this study was to examine cross-modal activation of auditory brain regions by visual speech from before to after implantation and its relation to CI success. Using fNIRS, we examined activation of superior temporal cortex to visual speech in the same profoundly deaf adults both before and 6 mo after implantation. Patients’ ability to understand auditory speech with their CI was also measured following 6 mo of CI use. Contrary to existing theory, the results demonstrate that increased cross-modal activation of auditory brain regions by visual speech from before to after implantation is associated with better speech understanding with a CI. Furthermore, activation of auditory cortex by visual and auditory speech developed in synchrony after implantation. Together these findings suggest that cross-modal plasticity by visual speech does not exert previously assumed maladaptive effects on CI success, but instead provides adaptive benefits to the restoration of hearing after implantation through an audiovisual mechanism.
42

Mosina, Natalya Michailovna, and Nina Valentinovna Kazaeva. "Semantics of Visual Perception Verbs in the Erzya-Mordvin and Finnish Languages". Yearbook of Finno-Ugric Studies 15, no. 1 (2 April 2021): 23–33. http://dx.doi.org/10.35634/2224-9443-2021-15-1-23-33.

Abstract:
The subject of this paper is visual perception verbs in the Erzya-Mordvin and Finnish languages, compared in terms of their semantic characteristics. Depending on which sensory system plays the leading role in perception, one distinguishes, alongside visual perception, auditory, tactile, olfactory, and gustatory perception. This group of verbs is interrelated at the level of sense perception: as verbs of perception they are directed at objects with physical characteristics, yet many of them also come to denote the perception of concepts, so perception verbs develop polysemy in several directions. The novelty of the research lies in the comparative study of the lexical level of the Erzya-Mordvin and Finnish languages, which will make it possible to address theoretical aspects of Finno-Ugric linguistics in the future. The semantics of perception verbs, or of perceptual activity, remains a relevant problem. The purpose of this research is therefore to describe the structure of the semantic field of verbs of one aspect of perception, namely the visual one: to determine the nuclear and peripheral verbal units in the languages under study; to describe the system of meanings of verbal lexemes in Erzya and Finnish and to analyze the polysemy of the studied verbal group in each language; and to reveal additional semantic connotations in these verbal lexemes. Of particular interest is also the comparative study of how the same semantic meaning is expressed in distantly related languages, in this case Erzya and Finnish.
43

Stacey, Jemaine E., Christina J. Howard, Suvobrata Mitra and Paula C. Stacey. "Audio-visual integration in noise: Influence of auditory and visual stimulus degradation on eye movements and perception of the McGurk effect". Attention, Perception, & Psychophysics 82, no. 7 (12 June 2020): 3544–57. http://dx.doi.org/10.3758/s13414-020-02042-x.

Abstract:
Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signal affected AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162–171, 2008) found fixations on the mouth increased in the presence of auditory noise whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials when the McGurk effect was perceived. Adding auditory noise led to people fixating the mouth more, while visual degradation led to people fixating the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables varies according to the quality of information available.
44

Glumm, Monica M., Kathy L. Kehring and Timothy L. White. "Effects of Visual and Auditory Cues About Threat Location on Target Acquisition and Attention to Auditory Communications". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 3 (September 2005): 347–51. http://dx.doi.org/10.1177/154193120504900328.

Abstract:
This laboratory study examined the effects of visual, spatial language, and 3-D audio cues about target location on target acquisition performance and the recall of information contained in concurrent radio communications. Two baseline conditions were also included in the analysis: no cues (baseline 1) and target presence cues only (baseline 2). In modes in which target location cues were provided, 100% of the targets presented were acquired compared to 94% in baseline 1 and 95% in baseline 2. On average, targets were acquired 1.4 seconds faster in the visual, spatial language, and 3-D audio modes than in the baseline conditions, with times in the visual and 3-D audio modes being 1 second faster than those in spatial language. Overall workload scores were lower in the 3-D audio mode than in all other conditions except the visual mode. Less information (23%) was recalled from auditory communications in baseline 1 than in the other four conditions where attention could be directed to communications between target presentations.
45

Hermes, Dik J. "Auditory and Visual Similarity of Pitch Contours". Journal of Speech, Language, and Hearing Research 41, no. 1 (February 1998): 63–72. http://dx.doi.org/10.1044/jslhr.4101.63.

Abstract:
It has been shown that visual display systems of intonation can be employed beneficially in teaching intonation to persons with deafness and in teaching the intonation of a foreign language. This paper addresses the question of whether important audible differences between two pitch contours correspond to visually conspicuous differences between displayed pitch contours. If visual feedback of intonation is to be effective in teaching situations, such correspondence must exist. In two experiments, phoneticians rated the dissimilarity of two pitch contours. In the first experiment they rated the two pitch contours auditorily (i.e., by listening to two resynthesized utterances); in the second, they rated the same two pitch contours visually (i.e., by looking at the two contours displayed on a computer screen). The results indicate why visual feedback may be very effective in intonation training if pitch contours are displayed in such a way that only auditorily relevant features are represented.
46

McDaniel, Jena, and Stephen Camarata. "Does Access to Visual Input Inhibit Auditory Development for Children With Cochlear Implants? A Review of the Evidence". Perspectives of the ASHA Special Interest Groups 2, no. 9 (January 2017): 10–24. http://dx.doi.org/10.1044/persp2.sig9.10.

Abstract:
Purpose: We review the evidence for attenuating visual input during intervention to enhance auditory development and ultimately improve spoken language outcomes in children with cochlear implants. Background: Isolating the auditory sense is a long-standing tradition in many approaches for teaching children with hearing loss. However, the evidence base for this practice is surprisingly limited and not straightforward. We review four bodies of evidence that inform whether or not visual input inhibits auditory development in children with cochlear implants: (a) audiovisual benefits for speech perception and understanding for individuals with typical hearing, (b) audiovisual integration development in children with typical hearing, (c) sensory deprivation and neural plasticity, and (d) audiovisual processing in individuals with hearing loss. Conclusions: Although there is a compelling theoretical rationale for reducing visual input to enhance auditory development, there is also a strong theoretical argument supporting simultaneous multisensory auditory and visual input to potentially enhance outcomes in children with hearing loss. Despite widespread and long-standing practice recommendations to limit visual input, there is a paucity of evidence supporting this recommendation and no evidence that simultaneous multisensory input is deleterious to children with cochlear implants. These findings have important implications for optimizing spoken language outcomes in children with cochlear implants.
47

Grant, Ken W., Virginie van Wassenhove and David Poeppel. "Detection of auditory (cross-spectral) and auditory–visual (cross-modal) synchrony". Speech Communication 44, no. 1-4 (October 2004): 43–53. http://dx.doi.org/10.1016/j.specom.2004.06.004.

48

Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia". Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.

Abstract:
The contribution of a visual source of contextual information to speech perception was measured in 12 listeners with aphasia. The three experimental conditions were: Visual-Only (referential gesture), Auditory-Only (computer-edited speech), and Audio-Visual. In a two-alternative, forced-choice task, subjects indicated which picture had been requested. The stimuli were first validated with listeners without brain damage. The listeners with aphasia were subgrouped as having high or low language comprehension based on standardized test scores. Results showed a significantly larger contribution of gestural information to the responses of the lower-comprehension subgroup. The contribution of gesture was significantly correlated with the amount of ambiguity experienced with the auditory-only information. These results show that as the auditory information becomes more ambiguous, individuals with language comprehension deficits make greater use of the visual information. The results support clinical observations that speech information received without visual context is perceived differently than when received with visual context.
49

Dawson, P. W., P. A. Busby, C. M. McKay and G. M. Clark. "Short-Term Auditory Memory in Children Using Cochlear Implants and Its Relevance to Receptive Language". Journal of Speech, Language, and Hearing Research 45, no. 4 (August 2002): 789–801. http://dx.doi.org/10.1044/1092-4388(2002/064).

Abstract:
The aim of this study was to assess auditory sequential short-term memory (SSTM) performance in young children using cochlear implants (CI group) and to examine the relationship of this performance to receptive language performance. Twenty-four children, 5 to 11 years old, using the Nucleus 22-electrode cochlear implant, were tested on a number of auditory and visual tasks of SSTM. The auditory memory tasks were designed to minimize the effect of auditory discrimination ability. Stimuli were chosen that children with cochlear implants could accurately identify with a reaction time similar to that of a control group of children with normal hearing (NH group). All children were also assessed on a receptive language test and on a nonverbal intelligence scale. As expected, children using cochlear implants demonstrated poorer auditory and visual SSTM skills than their hearing peers when the stimuli were verbal or were pictures that could be readily labeled. They did not differ from their peers with normal hearing on tasks where the stimuli were less likely to be verbally encoded. An important finding was that the CI group did not appear to have a sequential memory deficit specific to the auditory modality. The difference scores (auditory minus visual memory performance) for the CI group were not significantly different from those for the NH group. SSTM performance accounted for significant variance in the receptive language performance of the CI group. However, a forward stepwise regression analysis revealed that visual spatial memory (one of the subtests of the nonverbal IQ test) was the main predictor of variance in the language scores of the children using cochlear implants.
50

Lalonde, Kaylah, and Lynne A. Werner. "Infants and Adults Use Visual Cues to Improve Detection and Discrimination of Speech in Noise". Journal of Speech, Language, and Hearing Research 62, no. 10 (25 October 2019): 3860–75. http://dx.doi.org/10.1044/2019_jslhr-h-19-0106.

Abstract:
Purpose: This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method: Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1–3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results: Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset–offset cue for detection, but the same cue did not improve their discrimination. The onset–offset cue benefited infants for both detection and discrimination. Whereas the onset–offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions: These results suggest that infants' use of visual onset–offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
