Academic literature on the topic 'Visual and auditory languages'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual and auditory languages.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Visual and auditory languages"

1

Burnham, Denis, Benjawan Kasisopa, Amanda Reid, Sudaporn Luksaneeyanawin, Francisco Lacerda, Virginia Attina, Nan Xu Rattanasone, Iris-Corinna Schwarz, and Diane Webster. "Universality and language-specific experience in the perception of lexical tone and pitch." Applied Psycholinguistics 36, no. 6 (November 21, 2014): 1459–91. http://dx.doi.org/10.1017/s0142716414000496.

Full text
Abstract:
Two experiments focus on Thai tone perception by native speakers of tone languages (Thai, Cantonese, and Mandarin), a pitch-accent language (Swedish), and a nontonal language (English). In Experiment 1, there was better auditory-only and auditory-visual discrimination by tone and pitch-accent language speakers than by nontone language speakers. Conversely and counterintuitively, there was better visual-only discrimination by nontone language speakers than by tone and pitch-accent language speakers. Nevertheless, visual augmentation of auditory tone perception in noise was evident for all five language groups. In Experiment 2, involving discrimination in three fundamental-frequency-equivalent auditory contexts, tone and pitch-accent language participants showed equivalent discrimination for normal Thai speech, filtered speech, and violin sounds. In contrast, nontone language listeners had significantly better discrimination for violin sounds than for filtered speech, and in turn for normal speech. Together the results show that tone perception is determined by both auditory and visual information, by acoustic and linguistic contexts, and by universal and experiential factors.
APA, Harvard, Vancouver, ISO, and other styles
2

Vélez-Uribe, Idaly, and Mónica Rosselli. "The auditory and visual appraisal of emotion-related words in Spanish–English bilinguals." Bilingualism: Language and Cognition 22, no. 1 (October 5, 2017): 30–46. http://dx.doi.org/10.1017/s1366728917000517.

Full text
Abstract:
Bilinguals experience emotions differently depending on which language they are speaking. Emotionally loaded words were expected to be appraised differently in the first versus the second language in Spanish–English bilinguals. Three categories of words (positive, negative, and taboo) were appraised in both languages in the visual and auditory sensory modalities. Positive word ratings were more positive in English than in Spanish. Negative words were judged as more negative in English than in Spanish. Taboo words were rated as more negative in Spanish than in English. Significant regression models were obtained for the visual and auditory positive words and auditory negative words with English and Spanish proficiency as the most significant predictors. Results support the view that there are differences in the appraisal of emotions in the two languages spoken by bilinguals; the direction of the difference depends on the emotion category of words, and it is influenced by language proficiency.
APA, Harvard, Vancouver, ISO, and other styles
3

Lu, Youtao, and James L. Morgan. "Homophone auditory processing in cross-linguistic perspective." Proceedings of the Linguistic Society of America 5, no. 1 (March 23, 2020): 529. http://dx.doi.org/10.3765/plsa.v5i1.4733.

Full text
Abstract:
Previous studies reported conflicting results for the effects of homophony on visual word processing across languages. On finding significant differences in homophone density in Japanese, Mandarin Chinese and English, we conducted two experiments to compare native speakers’ competence in homophone auditory processing across these three languages. A lexical decision task showed that the effect of homophony on word processing in Japanese was significantly less detrimental than in Mandarin and English. A word-learning task showed that native Japanese speakers were the fastest in learning novel homophones. These results suggest that language-intrinsic properties influence corresponding language processing abilities of native speakers.
APA, Harvard, Vancouver, ISO, and other styles
4

Brookshire, Geoffrey, Jenny Lu, Howard C. Nusbaum, Susan Goldin-Meadow, and Daniel Casasanto. "Visual cortex entrains to sign language." Proceedings of the National Academy of Sciences 114, no. 24 (May 30, 2017): 6352–57. http://dx.doi.org/10.1073/pnas.1620350114.

Full text
Abstract:
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ∼1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
APA, Harvard, Vancouver, ISO, and other styles
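The analysis this abstract describes, quantifying visual change frame by frame and then measuring cortical coherence to that signal, can be made concrete in a few lines. The sketch below is only an illustration of the idea, not the authors' pipeline; the sampling rate, window length, and luminance-difference metric are all assumptions.

```python
# Sketch: coherence between a visual-change time series and EEG, in the spirit
# of the entrainment analysis above. All parameters are illustrative.
import numpy as np
from scipy.signal import coherence

FS = 256.0  # assumed common sampling rate (Hz) for EEG and resampled video

def visual_change(frames):
    """Mean absolute luminance change between consecutive frames.

    frames: array of shape (n_frames, height, width), resampled to FS.
    """
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

def entrainment_spectrum(eeg, frames):
    """Coherence spectrum between one EEG channel and visual change."""
    v = visual_change(frames)
    n = min(len(eeg), len(v))
    return coherence(eeg[:n], v[:n], fs=FS, nperseg=int(4 * FS))

# Demo with synthetic data; the study reports coherence peaking near 1 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(60 * FS))          # 60 s of placeholder EEG
frames = rng.random((int(60 * FS) + 1, 32, 32))  # placeholder video frames
f, cxy = entrainment_spectrum(eeg, frames)
print(cxy[f < 5.0].round(3))                     # inspect the <5 Hz band
```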
5

Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech." International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.

Full text
Abstract:
The present eye-tracking study aimed to investigate the impact of auditory speech information on 12-month-olds' gaze behavior to silently-talking faces. We examined German infants' face-scanning behavior during side-by-side presentation of a bilingual speaker's face silently speaking German utterances on one side and French on the other, before and after auditory familiarization with one of the two languages. The results showed that 12-month-old infants had no general visual preference for either visual speech, neither before nor after auditory input. However, infants who heard native speech decreased their looking time to the mouth area and focused longer on the eyes compared with their scanning behavior without auditory language input, whereas infants who heard non-native speech increased their visual attention to the mouth region and focused less on the eyes. Thus, it can be assumed that 12-month-olds quickly identified their native language on the basis of auditory speech and directed their visual attention more to the eye region than infants who had listened to non-native speech.
APA, Harvard, Vancouver, ISO, and other styles
6

de la Cruz-Pavía, Irene, Janet F. Werker, Eric Vatikiotis-Bateson, and Judit Gervain. "Finding Phrases: The Interplay of Word Frequency, Phrasal Prosody and Co-speech Visual Information in Chunking Speech by Monolingual and Bilingual Adults." Language and Speech 63, no. 2 (April 19, 2019): 264–91. http://dx.doi.org/10.1177/0023830919842353.

Full text
Abstract:
The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., one in which objects precede verbs) can use word frequency, phrasal prosody, and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that monolinguals and bilinguals used the auditory and visual sources of information to chunk "phrases" from the input. These results suggest that speech segmentation is a bimodal process, though the influence of co-speech facial gestures is rather limited and linked to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, seems to determine the bilinguals' segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.
APA, Harvard, Vancouver, ISO, and other styles
7

Newman-Norlund, Roger D., Scott H. Frey, Laura-Ann Petitto, and Scott T. Grafton. "Anatomical Substrates of Visual and Auditory Miniature Second-language Learning." Journal of Cognitive Neuroscience 18, no. 12 (December 2006): 1984–97. http://dx.doi.org/10.1162/jocn.2006.18.12.1984.

Full text
Abstract:
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT activity increased between Session 1 and Session 2, then left PT activity decreased from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance.
APA, Harvard, Vancouver, ISO, and other styles
8

Storms, Russell L., and Michael J. Zyda. "Interactions in Perceived Quality of Auditory-Visual Displays." Presence: Teleoperators and Virtual Environments 9, no. 6 (December 2000): 557–80. http://dx.doi.org/10.1162/105474600300040385.

Full text
Abstract:
The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
APA, Harvard, Vancouver, ISO, and other styles
9

Hasenäcker, Jana, Luianta Verra, and Sascha Schroeder. "Comparing length and frequency effects in children across modalities." Quarterly Journal of Experimental Psychology 72, no. 7 (October 20, 2018): 1682–91. http://dx.doi.org/10.1177/1747021818805063.

Full text
Abstract:
Although it is well established that beginning readers rely heavily on phonological decoding, the overlap of the phonological pathways used in visual and auditory word recognition is not clear. Especially in transparent languages, phonological reading could use the same pathways as spoken word processing. In the present study, we report a direct comparison of lexical decision performance in the visual and auditory modality in beginning readers of a transparent language. Using lexical decision, we examine how marker effects of length and frequency differ in the two modalities and how these differences are modulated by reading ability. The results show that both frequency and length effects are stronger in the visual modality, and the differences in length effects between modalities are more pronounced for poorer readers than for better readers. This suggests that visual word recognition in beginning readers of a transparent language initially is based on phonological decoding and subsequent matching in the phonological lexicon, especially for poor readers. However, some orthographic processing seems to be involved already. We claim that the relative contribution of the phonological and orthographic route in beginning readers can be measured by the differences in marker effects between auditory and visual lexical decision.
APA, Harvard, Vancouver, ISO, and other styles
10

Lallier, Marie, Nicola Molinaro, Mikel Lizarazu, Mathieu Bourguignon, and Manuel Carreiras. "Amodal Atypical Neural Oscillatory Activity in Dyslexia." Clinical Psychological Science 5, no. 2 (December 21, 2016): 379–401. http://dx.doi.org/10.1177/2167702616670119.

Full text
Abstract:
It has been proposed that atypical neural oscillations in both the auditory and the visual modalities could explain why some individuals fail to learn to read and suffer from developmental dyslexia. However, the role of specific oscillatory mechanisms in reading acquisition is still under debate. In this article, we take a cross-linguistic approach and argue that both the phonological and orthographic specifics of a language (e.g., linguistic rhythm, orthographic depth) shape the oscillatory activity thought to contribute to reading development. The proposed theoretical framework should allow future research to test cross-linguistic hypotheses that will shed light on the heterogeneity of auditory and visual disorders and their underlying brain dysfunction(s) in developmental dyslexia, and inform clinical practice by helping us to diagnose dyslexia across languages.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Visual and auditory languages"

1

Spencer, Dawna. "Visual and auditory metalinguistic methods for Spanish second language acquisition." Thesis, 2008. http://library2.up.edu/theses/2008_spencerd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Erdener, Vahit Dogu. "The effect of auditory, visual and orthographic information on second language acquisition." Thesis, School of Psychology, College of Arts, Education and Social Sciences, University of Western Sydney, 2002. http://handle.uws.edu.au:8081/1959.7/685.

Full text
Abstract:
The current study investigates the effect of auditory and visual speech information and orthographic information on second/foreign language (L2) acquisition. To test this, native speakers of Turkish (a language with a transparent orthography) and native speakers of Australian English (a language with an opaque orthography) were exposed to Spanish (transparent orthography) and Irish (opaque orthography) legal non-word items in four experimental conditions: auditory-only, auditory-visual, auditory-orthographic, and auditory-visual-orthographic. On each trial, Turkish and Australian English speakers were asked to produce the Spanish or Irish legal non-word presented. In terms of phoneme errors, it was found that Turkish participants generally made fewer errors in Spanish than their Australian counterparts, and visual speech information generally facilitated performance. Orthographic information had an overriding effect such that there was no visual advantage once it was provided. In the orthographic conditions, Turkish speakers performed better than their Australian English counterparts with Spanish items and worse with Irish items. In terms of native speakers' ratings of participants' productions, it was found that orthographic input improved accent. Overall, the results confirm findings that visual information enhances speech production in L2 and additionally show the facilitative effects of orthographic input in L2 acquisition as a function of orthographic depth. Inter-rater reliability measures revealed that the native speaker rating procedure may be prone to individual and socio-cultural influences that may stem from internal criteria for native accents. This suggests that native speaker ratings should be treated with caution.
Master of Arts (Hons)
APA, Harvard, Vancouver, ISO, and other styles
3

Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition." View thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.

Full text
Abstract:
Thesis (MA (Hons)) -- University of Western Sydney, 2002.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002" Bibliography : leaves 83-93.
APA, Harvard, Vancouver, ISO, and other styles
4

Nácar, García Loreto 1988. "Language acquisition in bilingual infants : Early language discrimination in the auditory and visual domains." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/511361.

Full text
Abstract:
Learning language is a cornerstone of cognitive development during the first year of life. A fundamental difference between infants growing up in monolingual versus bilingual environments is the necessity for the latter to discriminate between two language systems from very early in life. To be able to learn two different languages, bilingual infants have to perceive the regularities of each of their two languages while keeping them separated. In this thesis we explore the differences between monolingual and bilingual infants in their early language discrimination abilities, as well as the strategies that arise in each group as a consequence of adaptation to their different linguistic environments. In chapter two, we examine the capacities of monolingual and bilingual 4-month-old infants to discriminate their native/dominant language from foreign ones in the auditory domain. Our results show that, in this context, bilingual and monolingual infants present different brain signals, both in the temporal and the frequency domain, when listening to their native language. The results pinpoint that discriminating the native language represents a higher cognitive cost for bilingual than for monolingual infants when only auditory information is available. In chapter three we explore the abilities of monolingual and bilingual 8-month-old infants to discriminate between languages in the visual domain. Here we showed videos of two different sign languages to infants never exposed to any sign language and measured their discrimination abilities using a habituation paradigm. The results show that at this age only bilingual infants can discriminate between the two sign languages. The results of a second control study point in the direction that bilinguals exploit the information coming from the face of the signer to make the distinction. Altogether, the studies presented in this thesis investigate an ability fundamental to learning language, especially in the case of bilingual environments: discriminating between different languages. Compared with a monolingual environment, a bilingual environment provides more information (two languages) but less exposure to each language (on average, half of the time to each). We argue that the developing brain is as prepared to learn two languages from birth as it is to learn one. However, to do so, monolingual and bilingual infants develop particular strategies that allow them to select the relevant information from the auditory and visual domains.
APA, Harvard, Vancouver, ISO, and other styles
5

Greenwood, Toni Elspeth. "Auditory language comprehension, and sequential interference in working memory following sustained visual attention /." Title page, contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsg8166.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wroblewski, Marcin. "Developmental predictors of auditory-visual integration of speech in reverberation and noise." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6017.

Full text
Abstract:
Objectives: Elementary school classrooms that meet the acoustic requirements for near-optimum speech recognition are extremely scarce. Poor classroom acoustics may become a barrier to speech understanding as children enter school. The purpose of this study was threefold: 1) to quantify the extent to which reverberation, lexical difficulty, and presentation mode affect speech recognition in noise, 2) to examine to what extent auditory-visual (AV) integration assists with the recognition of speech in noisy and reverberant environments typical of elementary school classrooms, 3) to understand the relationship between developing mechanisms of multisensory integration and the concurrently developing linguistic and cognitive abilities. Design: Twenty-seven typically developing children and 9 young adults participated. Participants repeated short sentences reproduced by 10 speakers on a 30” HDTV and/or over loudspeakers located around the listener in a simulated classroom environment. Signal-to-noise ratio (SNR) for 70 (SNR70) and 30 (SNR30) percent correct performance were measured using an adaptive tracking procedure. Auditory-visual integration was assessed via the SNR difference between AV and auditory-only (AO) conditions, labeled speech-reading benefit (SRB). Linguistic and cognitive aptitude was assessed using the NIH-Toolbox: Cognition Battery (NIH-TB: CB). Results: Children required more favorable SNRs for equivalent performance when compared to adults. Participants benefited from the reduction in lexical difficulty, and in most cases the reduction in reverberation time. Reverberation affected children’s speech recognition in AO condition and adults in AV condition. At SNR30, SRB was greater than that at SNR70. Adults showed marginally significant increase in AV integration relative to children. Adults also showed increase in SRB for lexically hard versus easy words, at high level of reverberation. Development of linguistic and cognitive aptitude accounts for approximately 35% of the variance in AV integration, with crystalized and fluid cognition composite scores identified as strongest predictors. Conclusions: The results of this study add to the body of evidence in support of children requiring more favorable SNRs to perform the same speech recognition tasks as adults in simulated listening environments akin to school classrooms. Our findings shed light on the development of AV integration for speech recognition in noise and reverberation during the school years, and provide insight into the balance of cognitive and linguistic underpinnings necessary for AV integration of degraded speech.
APA, Harvard, Vancouver, ISO, and other styles
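The "adaptive tracking procedure" used above to find the SNRs for 70% and 30% correct performance can be illustrated with a weighted up-down staircase (Kaernbach, 1991). The dissertation does not spell out its exact rule here, so the step sizes, stopping criterion, and simulated listener below are assumptions, not the study's implementation.

```python
# Sketch of adaptive SNR tracking via a weighted up-down staircase that
# converges on a target proportion correct (e.g., 0.70 or 0.30).
import random

def track_snr(respond, target=0.70, start_snr=10.0, step_down=2.0,
              n_reversals=12):
    """Estimate the SNR (dB) at which `respond` is correct `target` of the time.

    respond(snr) -> bool. After a correct response the SNR falls by step_down;
    after an error it rises by step_down * target / (1 - target), which makes
    the expected drift zero exactly at the target performance level.
    """
    step_up = step_down * target / (1.0 - target)
    snr, last_dir, reversals = start_snr, 0, []
    while len(reversals) < n_reversals:
        direction = -1 if respond(snr) else 1
        if last_dir and direction != last_dir:
            reversals.append(snr)  # record each change of direction
        last_dir = direction
        snr += direction * (step_down if direction == -1 else step_up)
    tail = reversals[2:]           # discard early reversals, average the rest
    return sum(tail) / len(tail)

# Simulated listener: logistic psychometric function of SNR.
def fake_listener(snr):
    return random.random() < 1.0 / (1.0 + 10.0 ** (-snr / 4.0))

print(track_snr(fake_listener, target=0.70))  # near the listener's 70% point
```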
7

Rybarczyk, Aubrey Rachel. "Weighting of Visual and Auditory Stimuli in Children with Autism Spectrum Disorders." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1459977848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bosworth, Rain G. "Psychophysical investigation of visual perception in deaf and hearing adults: effects of auditory deprivation and sign language experience." Diss., University of California, San Diego, 2001. http://wwwlib.umi.com/cr/ucsd/fullcit?p3015850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pénicaud, Sidonie. "Insights about age of language exposure and brain development : a voxel-based morphometry approach." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111591.

Full text
Abstract:
Early language experience is thought to be essential to develop a high level of linguistic proficiency in adulthood. Impoverished language input during childhood has been found to lead to functional changes in the brain. In this study, we explored if delayed exposure to a first language modulates the neuroanatomical development of the brain. To do so, voxel-based morphometry (VBM) was carried out in a group of congenitally deaf individuals varying in the age of first exposure to American Sign Language (ASL). To explore a secondary question about the effect of auditory deprivation on structural brain development, a second VBM analysis compared deaf individuals to matched hearing controls. The results show that delayed exposure to sign language is associated with a decrease in grey-matter concentration in the visual cortex close to an area found to show functional reorganization related to delayed exposure to language, while auditory deprivation is associated with a decrease in white matter in the right primary auditory cortex. These findings suggest that a lack of early language experience alters the anatomical organization of the brain.
APA, Harvard, Vancouver, ISO, and other styles
10

Lima, Fernanda Leitão de Castro Nunes de [UNESP]. "Julgamento perceptivo-auditivo e perceptivo-visual das produções gradientes de fricativas coronais surdas." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154302.

Full text
Abstract:
Purpose: The purpose of this study was to analyze the percentage of judges' answers in the auditory-perceptual judgment of audio recordings and in the visual-perceptual judgment of ultrasound images for the detection of gradient productions of voiceless coronal fricatives, and to verify whether these two forms of judgment differ and whether they correlate. Methods: Twenty judges with knowledge of the speech production process, as well as of the phonetic classification and description of the different Brazilian Portuguese (BP) phonemes, were selected. The judged stimuli were collected from a database of audio and video files (ultrasound images) related to the production of the words "sapo" (frog) and "chave" (key) by 11 BP-speaking children aged 6 to 12 years (9 boys and 2 girls) with atypical speech production. The collected files were coded in advance. After prior instruction, the judges had to choose, immediately upon the presentation of a stimulus, one of three options arranged on the computer screen. The experimental procedure consisted of the judgment of the audio files and the judgment of the ultrasound images, executed with the PERCEVAL software. In the judgment of the audio files the options were: correct, incorrect, or gradient production; in the judgment of the ultrasound images the options were: production of [s], production of [∫], or undifferentiated production. The presentation time, the randomized selection of stimuli, and the reaction time were controlled automatically by the PERCEVAL software. The data were submitted to statistical analysis. Results: The judgment of images provided greater identification of gradient stimuli (137 stimuli) and a shorter reaction time (mean = 1073.12 ms) compared with the auditory-perceptual judgment (80 stimuli, mean reaction time = 3126.26 ms), both differences being statistically significant (p < 0.00). Spearman's correlation test showed no statistical significance for the percentage of responses or for reaction time. Conclusion: The use of ultrasound images in the judgment is the more sensitive method for detecting gradient production in speech, and it can be used as a complement to auditory-perceptual judgment in speech analysis.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Visual and auditory languages"

1

Teaching writing to visual, auditory, and kinesthetic learners. Thousand Oaks: Corwin Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jabr, Yaḥyá ʻAbd al-Raʼūf. al-Lughah wa-al-ḥawāss. Nābulus: [s.n.], 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ando, Yoichi. Auditory and Visual Sensations. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/b13253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

SpringerLink (Online service), ed. Auditory and Visual Sensations. New York, NY: Springer-Verlag New York, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Press, Leonard J. Parallels between auditory & visual processing. Santa Ana, CA: Optometric Extension Program Foundation, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Storms, Russell L. Auditory-visual cross-modal perception phenomena. Monterey, Calif: Naval Postgraduate School, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chang, Shi-Kuo, Tadao Ichikawa, and Panos A. Ligomenides, eds. Visual Languages. Boston, MA: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4613-1805-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, S. K., Tadao Ichikawa, and Panos A. Ligomenides, eds. Visual languages. New York: Plenum Press, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chang, Shi-Kuo. Visual Languages. Boston, MA: Springer US, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Evamy, Barbara. Auditory & visual discrimination exercises: A teacher's aid. [Great Britain]: B. Evamy, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Visual and auditory languages"

1

Stokoe, William C. "Visual and Auditory Orientations to Language learning." In Scientific and Humanistic Dimensions of Language, 315. Amsterdam: John Benjamins Publishing Company, 1985. http://dx.doi.org/10.1075/z.22.42sto.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ussishkin, Adam, and Alina Twist. "Auditory and visual lexical decision in Maltese." In Studies in Language Companion Series, 233–49. Amsterdam: John Benjamins Publishing Company, 2009. http://dx.doi.org/10.1075/slcs.113.16uss.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kretschmer, Laura W., and Richard R. Kretschmer. "Intervention for Children with Auditory or Visual Sensory Impairments." In The Handbook of Language and Speech Disorders, 57–98. Oxford, UK: Wiley-Blackwell, 2010. http://dx.doi.org/10.1002/9781444318975.ch3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Burnham, Denis, and Barbara Dodd. "Auditory-Visual Speech Perception as a Direct Process: The McGurk Effect in Infants and Across Languages." In Speechreading by Humans and Machines, 103–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-13015-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Robert-Ribes, Jordi, Jean-Luc Schwartz, and Pierre Escudier. "A Comparison of Models for Fusion of the Auditory and Visual Sensors in Speech Perception." In Integration of Natural Language and Vision Processing, 81–104. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-009-1639-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Potapova, Rodmonga, and Vsevolod Potapov. "Auditory and Visual Recognition of Emotional Behaviour of Foreign Language Subjects (by Native and Non-native Speakers)." In Speech and Computer, 62–69. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01931-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xu, Li, and Ning Zhou. "Tonal Languages and Cochlear Implants." In Auditory Prostheses, 341–64. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9434-9_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, Shi-Kuo. "Introduction: Visual Languages and Iconic Languages." In Visual Languages, 1–7. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4613-1805-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hirakawa, Masahito, Noriaki Monden, Iwao Yoshimoto, Minoru Tanaka, and Tadao Ichikawa. "Hi-Visual." In Visual Languages, 233–59. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4613-1805-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vatikiotis-Bateson, Eric, and Kevin G. Munhall. "Auditory-Visual Speech Processing." In The Handbook of Speech Production, 178–99. Hoboken, NJ: John Wiley & Sons, Inc, 2015. http://dx.doi.org/10.1002/9781118584156.ch9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Visual and auditory languages"

1

Gable, Thomas M., Brianna Tomlinson, Stanley Cantrell, and Bruce N. Walker. "Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in A Tonal Language." In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.025.

Full text
Abstract:
Auditory displays have been used extensively to enhance visual menus across diverse settings for various reasons. While standard auditory displays can be effective and help users across these settings, they often consist of text-to-speech cues, which can be time-intensive to use. Advanced auditory cues, including spindex and spearcon cues, have been developed to address this slow feedback issue. While these cues are most often used in English, they have also been applied to other languages, but research on using them in tonal languages, where tone might affect their usability, is lacking. The current research investigated the use of spindex and spearcon cues in Mandarin to determine their effectiveness in a tonal language. The results suggest that the cues can be effectively applied and used in a tonal language by untrained novices. This opens the door to future use of the cues in languages that reach a large portion of the world's population.
APA, Harvard, Vancouver, ISO, and other styles
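For readers unfamiliar with the cue types: a spearcon is typically created by time-compressing a spoken menu item until it no longer sounds like ordinary speech yet remains identifiable. A minimal sketch of that general recipe follows; the file names and compression rate are hypothetical, and the phase-vocoder stretch preserves pitch, which keeps Mandarin tone contours intact.

```python
# Sketch: make a spearcon by time-compressing recorded (e.g., TTS) speech.
# File names and the compression rate are illustrative assumptions.
import librosa
import soundfile as sf

def make_spearcon(speech_wav, out_wav, rate=2.5):
    """Speed up spoken audio without shifting pitch, so lexical tone
    contours in a tonal language such as Mandarin are preserved."""
    y, sr = librosa.load(speech_wav, sr=None)
    fast = librosa.effects.time_stretch(y, rate=rate)  # rate > 1 = faster
    sf.write(out_wav, fast, sr)

# Hypothetical usage for a Mandarin menu prompt:
# make_spearcon("shezhi_prompt.wav", "shezhi_spearcon.wav")
```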
2

Stenger, I., and T. Avgustinova. "Visual vs. auditory perception of Bulgarian stimuli by Russian native speakers." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-684-695.

Full text
Abstract:
This study contributes to a better understanding of receptive multilingualism by determining similarities and differences in the successful processing of written and spoken cognate words in an unknown but (closely) related language. We investigate two Slavic languages with regard to their mutual intelligibility. The current focus is on the recognition of isolated Bulgarian words by Russian native speakers in a cognate guessing task, considering both written and audio stimuli. The experimentally obtained intercomprehension scores show a generally high degree of intelligibility of Bulgarian cognates to Russian subjects, as well as processing difficulties in the case of visual vs. auditory perception. In search of an explanation, we examine the linguistic factors that can contribute to various degrees of written and spoken word intelligibility. The intercomprehension scores obtained in the online word translation experiments are correlated with (i) the identical and mismatched correspondences on the orthographic and phonetic level, (ii) the word length of the stimuli, and (iii) the frequency of Russian cognates. Additionally, we validate two measuring methods, the Levenshtein distance and the word adaptation surprisal, as potential predictors.
APA, Harvard, Vancouver, ISO, and other styles
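Of the two measures this abstract validates, the Levenshtein distance is easy to state in code. Below is a minimal reference implementation; normalizing by the longer word's length is a common convention assumed here, not necessarily the authors' exact choice.

```python
# Minimal Levenshtein (edit) distance between two words, plus a normalized
# 0-1 orthographic distance. Works unchanged on Cyrillic strings.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    return levenshtein(a, b) / max(len(a), len(b), 1)

# Example Bulgarian/Russian cognate pair: one substitution out of four letters.
print(normalized_distance("хляб", "хлеб"))  # 0.25
```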
3

Massaro, Dominic W., and Michael M. Cohen. "Auditory/visual speech in multimodal human interfaces." In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mixdorff, Hansjörg, Angelika Hönemann, Albert Rilliard, Tan Lee, and Matthew Ma. "Cross-Language Perception of Audio-visual Attitudinal Expressions." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cruz, Marisa, Marc Swerts, and Sónia Frota. "Do visual cues to interrogativity vary between language modalities? Evidence from spoken Portuguese and Portuguese Sign Language." In The 15th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/avsp.2019-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Öster, Anne-Marie. "Spoken L2 teaching with contrastive visual and auditory feedback." In 5th International Conference on Spoken Language Processing (ICSLP 1998). ISCA: ISCA, 1998. http://dx.doi.org/10.21437/icslp.1998-765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nunnemann, Eva Maria, Kirsten Bergmann, Helene Kreysa, and Pia Knoeferle. "Referential Gaze Makes a Difference in Spoken Language Comprehension: Human Speaker vs. Virtual Agent Listener Gaze." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sanjanaashree, P., M. Anand Kumar, and K. P. Soman. "Language learning for visual and auditory learners using scratch toolkit." In 2014 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 2014. http://dx.doi.org/10.1109/iccci.2014.6921765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sekiyama, Kaoru, and Yoichi Sugita. "Auditory-visual speech perception examined by brain imaging and reaction time." In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA: ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nouza, Jan. "Computer-aided spoken-language training with enhanced visual and auditory feedback." In 6th European Conference on Speech Communication and Technology (Eurospeech 1999). ISCA: ISCA, 1999. http://dx.doi.org/10.21437/eurospeech.1999-49.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Visual and auditory languages"

1

Yu, Wanchi. Implicit Learning of Children with and without Developmental Language Disorder across Auditory and Visual Categories. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Visram, Anisa, Iain Jackson, Ibrahim Almufarrij, Michael Stone, and Kevin Munro. Comparing visual reinforcement audiometry outcomes using different auditory stimuli and visual rewards. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2021. http://dx.doi.org/10.37766/inplasy2021.1.0080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Richardson, James. Auditory and Visual Sensory Stores: a Recognition Task. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Driesen, Jacob. Differential Effects of Visual and Auditory Presentation on Logical Reasoning. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Brady-Herbst, Brenene. An Analysis of Spondee Recognition Thresholds in Auditory-only and Audio-visual Conditions. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Harsh, John R. Auditory and Visual Evoked Potentials as a Function of Sleep Deprivation and Irregular Sleep. Fort Belvoir, VA: Defense Technical Information Center, August 1989. http://dx.doi.org/10.21236/ada228488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jokeit, H., R. Goertzl, E. Kuchleri, and S. Makeig. Event-Related Changes in the 40 Hz Electroencephalogram in Auditory and Visual Reaction Time Tasks. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada379543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Davis, Bradley M. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload. Fort Belvoir, VA: Defense Technical Information Center, February 2007. http://dx.doi.org/10.21236/ada463244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yatsymirska, Mariya. Social expression in multimedia texts. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.

Full text
Abstract:
The article investigates functional techniques of extralinguistic expression in multimedia texts; the effectiveness of figurative expressions as a reaction to modern events in Ukraine and their influence on the formation of public opinion is shown. Publications of journalists, broadcasts of media resonators, experts, public figures, politicians, readers are analyzed. The language of the media plays a key role in shaping the worldview of the young political elite in the first place. The essence of each statement is a focused thought that reacts to events in the world or in one’s own country. The most popular platform for mass information and social interaction is, first of all, network journalism, which is characterized by mobility and unlimited time and space. Authors have complete freedom to express their views in direct language, including their own word formation. Phonetic, lexical, phraseological and stylistic means of speech create expression of the text. A figurative word, a good aphorism or proverb, a paraphrased expression, etc. enhance the effectiveness of a multimedia text. This is especially important for headlines that simultaneously inform and influence the views of millions of readers. Given the wide range of issues raised by the Internet as a medium, research in this area is interdisciplinary. The science of information, combining language and social communication, is at the forefront of global interactions. The Internet is an effective source of knowledge and a forum for free thought. Nonlinear texts (hypertexts) – «branching texts or texts that perform actions on request», multimedia texts change the principles of information collection, storage and dissemination, involving billions of readers in the discussion of global issues. Mastering the word is not an easy task if the author of the publication is not well-read, is not deep in the topic, does not know the psychology of the audience for which he writes. Therefore, the study of media broadcasting is an important component of the professional training of future journalists. The functions of the language of the media require the authors to make the right statements and convincing arguments in the text. Journalism education is not only knowledge of imperative and dispositive norms, but also apodictic ones. In practice, this means that there are rules in media creativity that are based on logical necessity. Apodicticity is the first sign of impressive language on the platform of print or electronic media. Social expression is a combination of creative abilities and linguistic competencies that a journalist realizes in his activity. Creative self-expression is realized in a set of many important factors in the media: the choice of topic, convincing arguments, logical presentation of ideas and deep philological education. Linguistic art, in contrast to painting, music, sculpture, accumulates all visual, auditory, tactile and empathic sensations in a universal sign – the word. The choice of the word for the reproduction of sensory and semantic meanings, its competent use in the appropriate context distinguishes the journalist-intellectual from other participants in forums, round tables, analytical or entertainment programs. Expressive speech in the media is a product of the intellect (ability to think) of all those who write on socio-political or economic topics. In the same plane with him – intelligence (awareness, prudence), the first sign of which (according to Ivan Ogienko) is a good knowledge of the language. 
Intellectual language is an important means of organizing a journalistic text. It, on the one hand, logically conveys the author’s thoughts, and on the other – encourages the reader to reflect and comprehend what is read. The richness of language is accumulated through continuous self-education and interesting communication. Studies of social expression as an important factor influencing the formation of public consciousness should open up new facets of rational and emotional media broadcasting; to trace physical and psychological reactions to communicative mimicry in the media. Speech mimicry as one of the methods of disguise is increasingly becoming a dangerous factor in manipulating the media. Mimicry is an unprincipled adaptation to the surrounding social conditions; one of the most famous examples of an animal characterized by mimicry (change of protective color and shape) is a chameleon. In a figurative sense, chameleons are called adaptive journalists. Observations show that mimicry in politics is to some extent a kind of game that, like every game, is always conditional and artificial.
APA, Harvard, Vancouver, ISO, and other styles
10

Beiker, Sven, ed. Unsettled Issues Regarding Visual Communication Between Automated Vehicles and Other Road Users. SAE International, July 2021. http://dx.doi.org/10.4271/epr2021016.

Full text
Abstract:
As automated road vehicles begin their deployment into public traffic, they will need to interact with human-driven vehicles, pedestrians, bicyclists, etc. This requires some form of communication between those automated vehicles (AVs) and other road users. Some of these communication modes (e.g., auditory, motion) were discussed in "Unsettled Issues Regarding Communication of Automated Vehicles with Other Road Users." Unsettled Issues Regarding Visual Communication Between Automated Vehicles and Other Road Users focuses on visual communication and its balance of reach, clarity, and intuitiveness. This report discusses the different modes of visual communication (such as simple lights and rich text) and how they can be used for communication between AVs and other road users. A particular emphasis is put on standardization to highlight how uniformity and mass adoption increase the efficacy of communication means.
APA, Harvard, Vancouver, ISO, and other styles