Theses on the topic "Visual and auditory languages"

Follow this link to see other types of publications on the topic: Visual and auditory languages.

Consult the top 50 dissertations (master's and doctoral theses) for your research on the topic "Visual and auditory languages".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Browse theses from many different research areas and compile an accurate bibliography.

1

Spencer, Dawna. "Visual and auditory metalinguistic methods for Spanish second language acquisition". Connect online, 2008. http://library2.up.edu/theses/2008_spencerd.pdf.

2

Erdener, Vahit Dogu, University of Western Sydney, College of Arts, Education and Social Sciences, and School of Psychology. "The effect of auditory, visual and orthographic information on second language acquisition". THESIS_CAESS_PSY_Erdener_V.xml, 2002. http://handle.uws.edu.au:8081/1959.7/685.

Abstract:
The current study investigates the effect of auditory and visual speech information and orthographic information on second/foreign language (L2) acquisition. To test this, native speakers of Turkish (a language with a transparent orthography) and native speakers of Australian English (a language with an opaque orthography) were exposed to Spanish (transparent orthography) and Irish (opaque orthography) legal non-word items in four experimental conditions: auditory-only, auditory-visual, auditory-orthographic, and auditory-visual-orthographic. On each trial, Turkish and Australian English speakers were asked to produce each Spanish and Irish legal non-word. In terms of phoneme errors, it was found that Turkish participants generally made fewer errors in Spanish than their Australian counterparts, and visual speech information generally facilitated performance. Orthographic information had an overriding effect, such that there was no visual advantage once it was provided. In the orthographic conditions, Turkish speakers performed better than their Australian English counterparts with Spanish items and worse with Irish items. In terms of native speakers' ratings of participants' productions, it was found that orthographic input improved accent. Overall, the results confirm findings that visual information enhances speech production in L2, and additionally show the facilitative effects of orthographic input in L2 acquisition as a function of orthographic depth. Inter-rater reliability measures revealed that the native speaker rating procedure may be prone to individual and socio-cultural influences that may stem from internal criteria for native accents. This suggests that native speaker ratings should be treated with caution.
Master of Arts (Hons)
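
A note on the inter-rater reliability caveat above: agreement between raters on categorical judgments is commonly summarised with Cohen's kappa. The sketch below is illustrative only and is not taken from the thesis; the two raters and their 1-5 accent ratings are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    Illustrative only; for ordinal rating scales a weighted kappa or an
    intraclass correlation coefficient would often be preferred.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring the same six productions on a 1-5 scale.
print(round(cohens_kappa([5, 4, 3, 4, 2, 5], [5, 3, 3, 4, 2, 4]), 2))  # 0.56
```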
3

Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition /". View thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.

Abstract:
Thesis (MA (Hons)) -- University of Western Sydney, 2002.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002" Bibliography : leaves 83-93.
4

Nácar García, Loreto, 1988. "Language acquisition in bilingual infants : Early language discrimination in the auditory and visual domains". Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/511361.

Abstract:
Learning language is a cornerstone of cognitive development during the first year of life. A fundamental difference between infants growing up in monolingual versus bilingual environments is the necessity for the latter to discriminate between two language systems from very early in life. To be able to learn two different languages, bilingual infants have to perceive the regularities of each of their two languages while keeping them separated. In this thesis we explore the differences between monolingual and bilingual infants in their early language discrimination abilities, as well as the strategies that arise in each group as a consequence of their adaptation to their different linguistic environments. In chapter two, we examine the capacity of monolingual and bilingual 4-month-old infants to discriminate their native/dominant language from foreign ones in the auditory domain. Our results show that, in this context, bilingual and monolingual infants present different brain signals, both in the temporal and the frequency domain, when listening to their native language. The results pinpoint that discriminating the native language represents a higher cognitive cost for bilingual than for monolingual infants when only auditory information is available. In chapter three we explore the abilities of monolingual and bilingual 8-month-old infants to discriminate between languages in the visual domain. Here we showed infants never exposed to sign languages videos of two different sign languages, and measured their discrimination abilities using a habituation paradigm. The results show that at this age only bilingual infants can discriminate between the two sign languages. The results of a second control study point in the direction that bilinguals exploit the information coming from the face of the signer to make the distinction. Altogether, the studies presented in this thesis investigate a fundamental ability for learning language - especially in the case of bilingual environments - which is discriminating between different languages. Compared to a monolingual environment, a bilingual environment is characterized by more information (two languages) but less exposure to each of the languages (on average, half of the time for each). We argue that the developing brain is as prepared to learn one language from birth as it is to learn two. However, to do so, monolingual and bilingual infants develop particular strategies that allow them to select the relevant information from the auditory and visual domains.
5

Greenwood, Toni Elspeth. "Auditory language comprehension, and sequential interference in working memory following sustained visual attention /". Title page, contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsg8166.pdf.

6

Wroblewski, Marcin. "Developmental predictors of auditory-visual integration of speech in reverberation and noise". Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6017.

Abstract:
Objectives: Elementary school classrooms that meet the acoustic requirements for near-optimum speech recognition are extremely scarce. Poor classroom acoustics may become a barrier to speech understanding as children enter school. The purpose of this study was threefold: 1) to quantify the extent to which reverberation, lexical difficulty, and presentation mode affect speech recognition in noise; 2) to examine to what extent auditory-visual (AV) integration assists with the recognition of speech in noisy and reverberant environments typical of elementary school classrooms; 3) to understand the relationship between developing mechanisms of multisensory integration and concurrently developing linguistic and cognitive abilities. Design: Twenty-seven typically developing children and nine young adults participated. Participants repeated short sentences reproduced by 10 speakers on a 30" HDTV and/or over loudspeakers located around the listener in a simulated classroom environment. Signal-to-noise ratios (SNRs) for 70 percent (SNR70) and 30 percent (SNR30) correct performance were measured using an adaptive tracking procedure. Auditory-visual integration was assessed via the SNR difference between the AV and auditory-only (AO) conditions, labeled speech-reading benefit (SRB). Linguistic and cognitive aptitude was assessed using the NIH Toolbox: Cognition Battery (NIH-TB: CB). Results: Children required more favorable SNRs than adults for equivalent performance. Participants benefited from the reduction in lexical difficulty and, in most cases, from the reduction in reverberation time. Reverberation affected children's speech recognition in the AO condition and adults' in the AV condition. At SNR30, SRB was greater than at SNR70. Adults showed a marginally significant increase in AV integration relative to children. Adults also showed an increase in SRB for lexically hard versus easy words at the high level of reverberation. Development of linguistic and cognitive aptitude accounts for approximately 35% of the variance in AV integration, with the crystallized and fluid cognition composite scores identified as the strongest predictors. Conclusions: The results of this study add to the body of evidence that children require more favorable SNRs than adults to perform the same speech recognition tasks in simulated listening environments akin to school classrooms. Our findings shed light on the development of AV integration for speech recognition in noise and reverberation during the school years, and provide insight into the balance of cognitive and linguistic underpinnings necessary for AV integration of degraded speech.
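
The "adaptive tracking procedure" mentioned above is a standard psychophysical staircase: the SNR is made harder after correct responses and easier after errors, and the threshold is estimated from the reversal points. The sketch below shows only the general mechanics, using a simple one-up/one-down rule (which converges near 50% correct; tracking the 70% and 30% points, as in the study, would require a weighted or transformed rule). All function names and parameter values are hypothetical.

```python
import random

def staircase_snr(run_trial, start_snr=10.0, step_db=2.0, n_reversals=8):
    """One-up/one-down staircase over SNR in dB (illustrative only)."""
    snr, direction, reversals = start_snr, -1, []
    while len(reversals) < n_reversals:
        correct = run_trial(snr)           # present one sentence at this SNR
        step_sign = -1 if correct else +1  # harder after a hit, easier after a miss
        if step_sign != direction:         # track changed direction: a reversal
            reversals.append(snr)
            direction = step_sign
        snr += step_sign * step_db
    return sum(reversals[-4:]) / 4.0       # mean SNR of the last four reversals

# Hypothetical listener whose accuracy follows a logistic psychometric function.
def fake_listener(snr):
    return random.random() < 1.0 / (1.0 + 10.0 ** (-snr / 4.0))

print(round(staircase_snr(fake_listener), 1))
```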
7

Rybarczyk, Aubrey Rachel. "Weighting of Visual and Auditory Stimuli in Children with Autism Spectrum Disorders". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1459977848.

8

Bosworth, Rain G. "Psychophysical investigation of visual perception in deaf and hearing adults : effects of auditory deprivation and sign language experience /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 2001. http://wwwlib.umi.com/cr/ucsd/fullcit?p3015850.

9

Pénicaud, Sidonie. "Insights about age of language exposure and brain development : a voxel-based morphometry approach". Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111591.

Abstract:
Early language experience is thought to be essential to develop a high level of linguistic proficiency in adulthood. Impoverished language input during childhood has been found to lead to functional changes in the brain. In this study, we explored if delayed exposure to a first language modulates the neuroanatomical development of the brain. To do so, voxel-based morphometry (VBM) was carried out in a group of congenitally deaf individuals varying in the age of first exposure to American Sign Language (ASL). To explore a secondary question about the effect of auditory deprivation on structural brain development, a second VBM analysis compared deaf individuals to matched hearing controls. The results show that delayed exposure to sign language is associated with a decrease in grey-matter concentration in the visual cortex close to an area found to show functional reorganization related to delayed exposure to language, while auditory deprivation is associated with a decrease in white matter in the right primary auditory cortex. These findings suggest that a lack of early language experience alters the anatomical organization of the brain.
10

Lima, Fernanda Leitão de Castro Nunes de [UNESP]. "Julgamento perceptivo-auditivo e perceptivo-visual das produções gradientes de fricativas coronais surdas". Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154302.

Abstract:
Purpose: The purpose of this study was to analyze the percentage of judges' answers in the auditory-perceptual judgment of audio recordings and in the visual-perceptual judgment of ultrasound images in the detection of gradient productions of voiceless coronal fricatives, and to verify whether there are differences between these forms of judgment and whether they correlate. Methods: Twenty judges with knowledge of the speech production process, as well as of the phonetic classification and description of the different Brazilian Portuguese (BP) phonemes, were selected. The judged stimuli were collected from a database of audio and video files (ultrasound images) related to the production of the words "sapo" (frog) and "chave" (key) by 11 BP-speaking children aged 6 to 12 years (9 boys and 2 girls) with atypical speech production. The collected files were coded beforehand. After prior instruction, the judges had to choose, immediately upon the presentation of a stimulus, one of three options arranged on the computer screen. The experimental procedure consisted of the judgment of the audio files and the judgment of the ultrasound images, executed with the PERCEVAL software. In the judgment of the audio files the options were: correct, incorrect, or gradient production; in the judgment of the ultrasound images the options were: production of [s], production of [∫], or undifferentiated production. The presentation time, the randomized selection of the stimuli, and the reaction time were controlled automatically by the PERCEVAL software. The data were submitted to statistical analysis. Results: The judgment of images provided greater identification of the gradient stimuli (137 stimuli) and a shorter reaction time (mean = 1073.12 ms) compared to the auditory-perceptual judgment (80 stimuli, mean reaction time = 3126.26 ms), both differences statistically significant (p < 0.00). Spearman's correlation test showed no statistical significance for the percentage of responses, nor for the reaction time. Conclusion: The use of ultrasound images in the judgment is the most sensitive method for the detection of gradient production in speech, and can be used as a method complementary to auditory-perceptual judgment in speech analysis.
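
The correlation analysis reported above can be illustrated with a minimal sketch using SciPy's Spearman test; the per-judge percentages and mean reaction times below are invented for illustration and are not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical per-judge results for the two judgment modes:
# percentage of stimuli judged "gradient" and mean reaction time (ms).
pct_gradient_audio = [20, 35, 10, 45, 30, 25]
pct_gradient_image = [55, 60, 40, 70, 50, 65]
rt_audio = [3100, 2900, 3300, 3050, 3200, 2950]
rt_image = [1050, 1100, 990, 1150, 1080, 1020]

rho, p = spearmanr(pct_gradient_audio, pct_gradient_image)
print(f"responses:      rho = {rho:.2f}, p = {p:.3f}")
rho, p = spearmanr(rt_audio, rt_image)
print(f"reaction times: rho = {rho:.2f}, p = {p:.3f}")
```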
11

Malapetsa, Christina. "Stroop tasks with visual and auditory stimuli : How different combinations of spoken words, written words, images and natural sounds affect reaction times". Thesis, Stockholms universitet, Institutionen för lingvistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-185057.

Abstract:
The Stroop effect is the delay in reaction times due to interference. Since the original experiments of 1935, it has been used primarily in linguistic contexts. Language is a complex skill unique to humans, which involves a large part of the cerebral cortex and many subcortical regions. It is perceived primarily in auditory form (spoken) and secondarily in visual form (written), but it is also always perceived in representational form (natural sounds, images, smells, etc.). Auditory signals are processed much faster than visual signals, and the language processing centres are closer to the primary auditory cortex than to the primary visual cortex, but due to the integration of stimuli and the role of the executive functions, we are able to perceive both simultaneously and coherently. However, auditory signals are still processed faster, and this study focused on establishing how auditory and visual, linguistic and representational stimuli interact with each other and affect reaction times in four Stroop tasks with four archetypal mammals (dog, cat, mouse and pig): a written word against an image, a spoken word against an image, a written word against a natural sound, and a spoken word against a natural sound. Four hypotheses were tested: in all tasks reaction times would be faster when the stimuli were congruent (Stroop Hypothesis); reaction times would be faster when both stimuli are auditory than when they are visual (Audiovisual Hypothesis); reaction times would be similar in the tasks where one stimulus is auditory and the other visual (Similarity Hypothesis); finally, reaction times would be slower when stimuli come from two sources than when they come from one source (Attention Hypothesis). Twelve native speakers of Swedish between the ages of 22 and 40 participated. The experiment took place in the EEG lab of the Linguistics Department of Stockholm University. The same researcher (the author) and equipment were used for all participants. The results confirmed the Stroop Hypothesis, did not confirm the Audiovisual and Similarity Hypotheses, and the results for the Attention Hypothesis were mixed. The somewhat unexpected results were mostly attributed to a false initial assumption, namely that having two different auditory stimuli (one in each ear) was considered one source of stimuli, and possibly to the poor quality of some natural sounds. With this additional consideration, the results seem to be in accord with previous research. Future research could focus on more efficient ways to test reaction times in Stroop tasks involving auditory and visual stimuli, as well as on different populations, especially neurodiverse and bilingual populations.
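
The dependent measure in all four tasks above is the congruency cost: the reaction-time slowdown on incongruent relative to congruent trials. A minimal sketch of that per-task computation follows; the trial log, task names, and values are hypothetical, not the experiment's data.

```python
from statistics import mean

# Hypothetical trial log: (task, congruent?, reaction time in ms).
trials = [
    ("written_word_vs_image", True, 520), ("written_word_vs_image", False, 610),
    ("spoken_word_vs_image", True, 480), ("spoken_word_vs_image", False, 575),
    ("written_word_vs_sound", True, 540), ("written_word_vs_sound", False, 650),
    ("spoken_word_vs_sound", True, 455), ("spoken_word_vs_sound", False, 560),
]

for task in sorted({name for name, _, _ in trials}):
    congruent = [rt for name, c, rt in trials if name == task and c]
    incongruent = [rt for name, c, rt in trials if name == task and not c]
    # The Stroop effect is the extra time needed on incongruent trials.
    print(f"{task}: Stroop effect = {mean(incongruent) - mean(congruent):.0f} ms")
```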
12

Tabanlioglu, Selime. "The Relationship Between Learning Styles And Language Learning Strategies Of Pre-intermediate Eap Students". Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1014034/index.pdf.

Abstract:
This thesis aims to identify the learning styles and strategies of students, to check whether there are significant differences in the learning style and strategy preferences between male and female learners, and to investigate whether there is a relationship between students' learning style and strategy preferences. A total of 60 students were asked to complete two questionnaires. One was used to identify students' perceptual learning style preferences and the other was used to identify students' learning strategies. In addition, think-aloud protocols were held to determine the cognitive and metacognitive strategies students used while reading. The data analysis of the first questionnaire revealed that students' major learning style preferences were auditory learning and individual learning. Furthermore, a significant difference was found in the preference for tactile learning between males and females. The analysis of the second questionnaire revealed that cognitive strategies were favoured the most. No significant difference was found in the preferences for learning strategies between males and females. The analysis with respect to the relationship between learning styles and strategies revealed that:
• visual styles had a significant relation with affective strategies;
• auditory styles had significant relationships with memory, cognitive, affective, and social strategies;
• there was a significant relationship between the individual learning style and compensation strategies;
• none of the learning styles had a significant relationship with metacognitive strategies.
The think-aloud protocols revealed that students used various cognitive and metacognitive strategies.
13

Schneiders, Julia A. [Verfasser], and Axel [Akademischer Betreuer] Mecklinger. "Visual and auditory vocabulary acquisition in learning Chinese as a second language : the impact of modality-specific working memory training / Julia A. Schneiders. Betreuer: Axel Mecklinger". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1052221815/34.

14

Martin, Maria da Graça Morais. "Ressonância magnética funcional em indivíduos normais: base de dados do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/5/5151/tde-25062009-103809/.

Abstract:
Introduction: Functional magnetic resonance imaging has had a great impact on neuroscience, but its clinical applicability is still limited. One of the main reasons is the lack of population databases to support clinical decisions. The aim of this work was to constitute a local database of normal subjects, representative of the patients of the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (HC-FMUSP). Methods: The sample included 64 normal subjects who, at some point, accompanied patients of the HC-FMUSP. They all performed motor, somatosensory, language, audiovisual and memory paradigms in a 1.5 T magnet. Demographic, neuropsychological and behavioral data were collected. Scanner quality control was also verified. Data were analyzed with the XBAM software on an individual and group basis, and for behavioral correlation. Results: The sample had a variable demographic distribution. Group analysis showed results in agreement with the literature. The motor paradigm elicited a positive BOLD effect in the pre- and postcentral gyri, extending to premotor and parietal regions, the supplementary motor area, secondary somatosensory areas, basal ganglia and thalamus contralateral to the hand in question, and the ipsilateral cerebellum. Group analysis of the hand somatosensory paradigm showed the pre- and postcentral gyri, basal ganglia and thalamus contralateral to the stimulated hand, the ipsilateral cerebellum and bilateral secondary somatosensory areas. The group analysis of the somatosensory paradigm of the face showed the pre- and postcentral gyri, parietal cortex, premotor areas, inferior-posterior temporal cortex and secondary somatosensory areas bilaterally. Language paradigms showed a positive BOLD effect in the inferior frontal gyrus and insula bilaterally, larger on the left, the left middle frontal gyrus, anterior cingulate, supplementary motor area, right cerebellum, cerebellar vermis, and left basal ganglia and thalamus; in particular, overt verbal fluency with presentation of different letters also showed the left parietal lobe. The audiovisual paradigm group analysis showed a positive BOLD effect in the occipital and parietal cortex and cerebellum bilaterally during the visual condition, and bilateral temporal cortex with left frontal and parietal extension during the auditory condition. Finally, the working memory task showed activation in the occipital cortex, cerebellum, middle frontal gyri, parietal association cortex and mesial frontal region bilaterally, with right predominance. On an individual basis we detected a multitude of brain areas in each paradigm with great variability, and those with the highest frequency (≥ 85%) were: left precentral gyrus (95%) and superior right cerebellum (87%) during right hand movement; right precentral gyrus (88%) during left hand movement; left postcentral gyrus (88%) for the somatosensory stimulus of the right hand; right postcentral gyrus (89%) for the somatosensory stimulus of the left hand; right (90%) and left (88%) lingual gyri during the visual stimulus; and right (93%) and left (91%) middle temporal gyri for the auditory stimulus. Working memory and verbal fluency had no region with a frequency above 80%. Conclusions: The patterns of cerebral activation obtained in the group analyses are in agreement with the literature. Individual analyses showed a higher frequency of positive BOLD effect in the primary sensory and motor cortices. The data collected during this work constitute a database that can be used to support clinical decisions.
15

Wilkie, Sonia. "Auditory manipulation of visual perception". View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/39802.

Abstract:
Thesis (M.A. (Hons.))--University of Western Sydney, 2008.
Thesis accompanied by CD-ROM with demonstration of possible creative applications. A thesis presented to the University of Western Sydney, College of Arts, MARCS Auditory Laboratories, in fulfilment of the requirements for the degree of Master of Arts (Honours). Includes bibliographies. Thesis minus demonstration CD-ROM also available online at: http://handle.uws.edu.au:8081/1959.7/39849.
16

Zhao, Hang Ph D. Massachusetts Institute of Technology. "Visual and auditory scene parsing". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122101.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-132).
Scene parsing is a fundamental topic in computer vision and computational audition, where people develop computational approaches to achieve the human perceptual system's ability to understand scenes, e.g. to group the visual regions of an image into objects and to segregate sound components in a noisy environment. This thesis investigates fully-supervised and self-supervised machine learning approaches to parse visual and auditory signals, including images, videos, and audio. Visual scene parsing refers to densely grouping and labeling image regions into object concepts. First I build the MIT scene parsing benchmark based on a large-scale, densely annotated dataset, ADE20K. This benchmark, together with the state-of-the-art models we open source, offers a powerful tool for the research community to solve semantic and instance segmentation tasks. Then I investigate the challenge of parsing a large number of object categories in the wild. An open-vocabulary scene parsing model which combines a convolutional neural network with a structured knowledge graph is proposed to address the challenge. Auditory scene parsing refers to recognizing and decomposing sound components in complex auditory environments. I propose a general audio-visual self-supervised learning framework that learns from a large amount of unlabeled internet videos. The learning process discovers the natural synchronization of vision and sounds without human annotation. The learned model achieves the capability to localize sound sources in videos and separate them from the mixture. Furthermore, I demonstrate that motion cues in videos are tightly associated with sounds, which helps in solving sound localization and separation problems.
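
Benchmarks for the semantic segmentation task described above are commonly scored with mean intersection-over-union (IoU) between predicted and ground-truth label maps. The sketch below shows that metric on toy data; the arrays are invented and are not from ADE20K or the thesis.

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean per-class intersection-over-union of two integer label maps."""
    ious = []
    for c in range(n_classes):
        intersection = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:                      # skip classes absent from both maps
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 4x4 ground truth with three classes and a prediction that
# mislabels a single pixel.
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 2, 2],
               [2, 2, 2, 2]])
pred = gt.copy()
pred[0, 2] = 0
print(round(mean_iou(pred, gt, 3), 3))  # 0.85
```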
by Hang Zhao.
Ph. D. in Mechanical Engineering and Computation
17

Lee, Chung-sze Eunice. "Auditory, visual and auditory-visual contributions to the Cantonese-speaking hearing-impaired adolescents' recognition of consonants". Click to view the E-thesis via HKUTO, 1999. http://sunzi.lib.hku.hk/hkuto/record/B3621002X.

Abstract:
Thesis (B.Sc)--University of Hong Kong, 1999.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, May 14, 1999." Also available in print.
18

Storms, Russell L. "Auditory-visual cross-modal perception phenomena". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355474.

Abstract:
Dissertation (Ph.D. in Computer Science)--Naval Postgraduate School, September 1998.
Dissertation supervisor(s): Michael J. Zyda. "September 1998." Includes bibliographical references (p. 207-222). Also available online.
19

Saliba, Anthony John. "Auditory-visual integration in sound localisation". Thesis, University of Essex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249979.

20

Hulusić, Vedad. "Auditory-visual interaction in computer graphics". Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/47727/.

Abstract:
Generating high-fidelity images in real time at reasonable frame rates still remains one of the main challenges in computer graphics. Furthermore, visuals remain only one of the multiple sensory cues that are required to be delivered simultaneously in a multi-sensory virtual environment. The most frequently used sense, besides vision, in virtual environments and entertainment is audio. While the rendering community focuses on solving the rendering equation more quickly using various algorithmic and hardware improvements, the exploitation of human limitations to assist in this process remains largely unexplored. Many findings in the research literature prove the existence of physical and psychological limitations of humans, including attentional, perceptual, and Human Sensory System (HSS) limitations. Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant image quality difference. Furthermore, cross-modal effects, that is, the influence of one sensory input on another, for example sound and visuals, have also recently been shown to have a substantial impact on viewer perception of a virtual environment. In this thesis, auditory-visual cross-modal interaction research findings have been investigated and adapted for graphics rendering purposes. The results of five psychophysical experiments, involving 233 participants, showed that, even in the realm of computer graphics, there is a strong relationship between vision and audition in both the spatial and the temporal domain. The first experiment, investigating auditory-visual cross-modal interaction in the spatial domain, showed that unrelated sound effects reduce the perceived rendering quality threshold. In the following experiments, the effect of audio on temporal visual perception was investigated. The results obtained indicate that audio with certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. Furthermore, introducing the sound effect of footsteps to walking animations increased the perception of visual smoothness. These results suggest that under certain conditions the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of this reduction. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and its use in high-fidelity interactive multi-sensory virtual environments.
21

Martinez, Laura. "Auditory-visual intermodal discrimination in chimpanzees". 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126577.

Abstract:
Kyoto University (京都大学). Doctor of Science (博士(理学)), doctoral degree no. 甲第14990号 (理博第3469号), conferred under Article 4, Paragraph 1 of the Degree Regulations. Graduate School of Science, Division of Biological Sciences (京都大学大学院理学研究科生物科学専攻). Examining committee: Prof. Tetsuro Matsuzawa (chief examiner), Assoc. Prof. Masaki Tomonaga, Prof. Masanaru Takai.
22

Columbus, Rebecca Foushee. "Auditory-Visual System Interactions: Perinatal Visual Experience Affects Auditory Learning and Memory in Bobwhite Quail Chicks (Colinus virginianus)". Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/29226.

Abstract:
Early perceptual learning capacity has been shown to correspond with the relative status of emergent sensory systems throughout prenatal and postnatal development. It has also been shown that young infants can learn perceptual information during perinatal development. However, the exact nature of the relationship between prenatal and postnatal perceptual development and the role of early experience in learning ability have yet to be examined. The present study examined how auditory learning capacity in bobwhite quail chicks is affected by the interrelationship between the developing auditory and visual systems in late prenatal/early postnatal development. Chicks were provided with auditory information during the period immediately prior to or the period following hatching. In addition, visual experience was either provided or attenuated during both the prenatal and postnatal periods. Findings revealed that chicks postnatally exposed to 10 min/hr of maternal auditory stimulation in lighted conditions required 72 hr of exposure to the call in order to learn the bobwhite maternal call (Experiments 1A and 1B). Control chicks who experienced the prenatal egg-opening procedure demonstrated no naive preference for two individual variants of the bobwhite maternal assembly call (Experiment 2). However, embryos who received 10 min/hr of prenatal visual stimulation, or who were reared in prenatal darkness, successfully learned a maternal call with only 24 hr of postnatal exposure (Experiments 3A and 3C). Embryos who received prenatal visual stimulation and postnatal darkened rearing conditions (a mismatch between prenatal and postnatal experience) showed deficits in postnatal auditory learning (Experiment 3B). Embryos who were exposed to 10 min/hr of prenatal maternal auditory stimulation and 10 min/hr of nonconcurrent visual stimulation remembered the maternal call into later ages of postnatal development than in previous studies when reared in lighted or darkened postnatal conditions (Experiments 4A and 4B). However, when all prenatal and postnatal visual experience was removed from embryos' and chicks' environments, deficits in prenatal auditory learning and postnatal memory were observed (Experiment 4C). These results indicate that prenatal and postnatal learning in bobwhite quail occur differently, that mismatches between prenatal and postnatal experience interfere with postnatal auditory learning, and that prenatal learning and postnatal memory are affected by the amount of visual stimulation present within the chicks' environmental milieu. In the broader scheme, these results provide further evidence that the auditory and visual systems are linked during early development and support an ecological perspective on learning and memory.
Ph. D.
23

Andrews, Brandie. "Auditory and visual information facilitating speech integration". Connect to resource, 2007. http://hdl.handle.net/1811/25202.

Abstract:
Thesis (Honors)--Ohio State University, 2007.
Title from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 27-28). Available online via Ohio State University's Knowledge Bank.
24

Tamosiunas, Matthew Joseph. "Auditory-visual integration of sine-wave speech". Connect to resource, 2007. http://hdl.handle.net/1811/25203.

Abstract:
Thesis (Honors)--Ohio State University, 2007.
Title from first page of PDF file. Document formatted into pages: contains 34 p.; also includes graphics. Includes bibliographical references (p. 26-27). Available online via Ohio State University's Knowledge Bank.
25

Klintfors, Eeva. "Emergence of words : Multisensory precursors of sound-meaning associations in infancy". Doctoral thesis, Stockholm : Department of Linguistics, Stockholm University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7371.

26

Kirchner, Holle. "Visual auditory interstimulus contingency effects in saccade programming". [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=965164586.

27

Ver, Hulst Pamela. "Visual and auditory factors facilitating multimodal speech perception". Connect to resource, 2006. http://hdl.handle.net/1811/6629.

Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 24-26). Available online via Ohio State University's Knowledge Bank.
28

Heuermann, Heike. "Spatial and temporal factors in visual auditory interaction". [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967796601.

29

Hoffmann-Kuhnt, Matthias. "Visual and auditory vigilance in the bottlenosed dolphin". [S.l. : s.n.], 2003. http://www.diss.fu-berlin.de/2003/268/index.html.

30

Persson, Viktor. "Crossmodal correspondences between visual, olfactory and auditory information". Thesis, Stockholms universitet, Psykologiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-58837.

Abstract:
Our senses take in a large amount of information, information that is sometimes congruent across sensory modalities. Crossmodal correspondences are the study of how this information is integrated across modalities by the brain, across which dimensions the correspondences exist, and how they affect us. In the present paper four experiments were conducted, in which potential crossmodal correspondences between audition, vision and olfaction were investigated. It was hypothesized that crossmodal correspondences between olfaction, vision and audition exist along different dimensions. The results showed significant correlations between olfaction and audition when volume varies, i.e., a high volume is associated with a high concentration of an odor, and a low volume is associated with a low concentration of an odor, and vice versa. Furthermore, existing correspondences between vision and audition are reconfirmed. In conclusion, the results support the notion that crossmodal correspondences exist between all sensory modalities, although along different dimensions.
31

Lochner, Martin Jewell. "Auditory target identification in a visual search task". Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/755.

Abstract:
Previous research has shown that simultaneous auditory identification of the target in a visual search task can lead to more efficient (i.e. "flatter") search functions (Spivey et al., 2001). Experiment 1 replicates the paradigm of Spivey et al., providing subjects with auditory identification of the search target either before (Consecutive condition) or simultaneously with (Concurrent condition) the onset of the search task. RT x Set Size slopes in the Concurrent condition are approximately half as steep as those in the Consecutive condition. Experiment 2 employs a distractor-ratio manipulation to test the notion that subjects are using the simultaneous auditory target identification to "parse" the search set by colour, thus reducing the search set by half. The results of Experiment 2 do not support the notion that subjects are parsing the search set by colour. Experiment 3 addresses the same question as Experiment 2, but obtains the desired distractor ratios by holding the number of relevantly-coloured items constant while letting the overall set size vary. Unlike Experiment 2, Experiment 3 supports the interpretation that subjects are using the auditory target identification to parse the search set.
32

Lee, Catherine. "Perception of synchrony between auditory and visual stimuli". Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6375.

Abstract:
The literature has fairly consistently reported a difference in how well humans perceive synchrony depending on the order of auditory and visual stimuli. When the auditory stimulus occurs first and the visual stimulus follows, subjects are more sensitive, and so perceive asynchrony at smaller time delays between the stimuli. On the other hand, when the auditory stimulus follows the visual one, subjects are more tolerant and perceive stimuli with larger time delays as synchronous. Thresholds of synchrony perception in these two conditions are thus asymmetrical. The present study attempts to test the Lewkowicz Model, by which the asymmetrical thresholds are explained as a result of arrival-time differences between auditory and visual stimuli in the brain, such that the visual stimulus takes longer to be processed and perceived than the auditory one. Reaction times to these stimuli were measured to determine the arrival-time difference and plotted against synchrony perception. On the basis of the Lewkowicz Model, we predicted that the reaction-time difference between the two stimuli correlates with subjective synchrony. The results did not support the Lewkowicz Model. The expected tendency of 30-40 ms of subjective synchrony was not shown. The subjects took, on average, only 7.7 ms to detect asynchrony when the auditory stimulus followed the visual stimulus. That the subjects did not tolerate a greater temporal gap when the auditory stimulus followed, versus when it preceded, the visual stimulus was a very different result from the majority of previous studies. Different factors in perceiving synchrony are discussed in this paper, as well as the application of the research in telecommunications.
33

Jones, Laura. "Neurocognitive signatures of auditory and visual sensory gating". Thesis, Anglia Ruskin University, 2016. http://arro.anglia.ac.uk/700996/.

Abstract:
The aim of this thesis was to investigate the neurophysiological phenomenon of auditory and visual sensory gating and, primarily, to explore the notion of a cross-modal mechanism. The electrophysiological characterisation and associated cognitive functions of both visual and auditory sensory gating were examined to determine similarities and differences across the two input modalities. In order to explore this, three issues were addressed: 1) the latency and surface scalp location(s) of the maximal or most reliable sensory gating were identified; 2) the associated cognitive mechanism(s) were explored using 11 diverse tasks incorporating attentional inhibition; 3) the sensitivity of the gating mechanism was examined with regard to changes in stimulus form/location or changes in attentional demand. Despite limited consideration in the literature to date, this thesis reports evidence that sensory gating is a phenomenon that exists within the visual modality and, moreover, can be reliably observed. Compared to standard auditory gating procedures, visual gating is found at a later latency and further back, in the central-parietal or central-occipital electrode sites. Correlations of visual and auditory gating with latent inhibition and the continuous performance task suggest that gating, independent of modality, may reflect the encoding of target and non-target stimuli/stimulus features alike, and the subsequent categorisation and inhibition of those deemed irrelevant. Additionally, a comparable limitation was observed for both modalities with regard to the sensitivity of sensory gating, with spatial features being processed as a priority over the perceptual stimulus features. In conclusion, the differences in latency and component of the observed gating presented in this thesis indicate that visual and auditory sensory gating are not products of the same intra-cortical mechanism. Rather, the gating observed in each modality is a functionally distinct mechanism that is qualitatively analogous across modalities.
34

Mishra, Jyoti. "Neural processes underlying an auditory-induced visual illusion". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3296864.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed Mar. 24, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
35

Harrison, Neil Richard. "Behavioural and electrophysiological correlates of auditory-visual integration". Thesis, University of Liverpool, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507171.

36

Waisman, Rogeria. "Paraphilias in males : visual and auditory CNV studies". Thesis, King's College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419803.

37

Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.

38

Jones, Laura. "Neurocognitive signatures of auditory and visual sensory gating". Thesis, Anglia Ruskin University, 2016. https://arro.anglia.ac.uk/id/eprint/700996/1/Jones_2016.pdf.

Abstract:
The aim of this thesis was to investigate the neurophysiological phenomenon of auditory and visual sensory gating and, primarily, to explore the notion of a cross-modal mechanism. The electrophysiological characterisation and associated cognitive functions of both visual and auditory sensory gating were examined to determine similarities and differences across the two input modalities. In order to explore this, three issues were addressed: 1) the latency and surface scalp location(s) of the maximal or most reliable sensory gating were identified; 2) the associated cognitive mechanism(s) were explored using 11 diverse tasks incorporating attentional inhibition; 3) the sensitivity of the gating mechanism was examined with regard to changes in stimulus form/location or changes in attentional demand. Despite limited consideration in the literature to date, this thesis reports evidence that sensory gating is a phenomenon that exists within the visual modality and, moreover, can be reliably observed. Compared to standard auditory gating procedures, visual gating is found at a later latency and further back, in the central-parietal or central-occipital electrode sites. Correlations of visual and auditory gating with latent inhibition and the continuous performance task suggest that gating, independent of modality, may reflect the encoding of target and non-target stimuli/stimulus features alike, and the subsequent categorisation and inhibition of those deemed irrelevant. Additionally, a comparable limitation was observed for both modalities with regard to the sensitivity of sensory gating, with spatial features being processed as a priority over the perceptual stimulus features. In conclusion, the differences in latency and component of the observed gating presented in this thesis indicate that visual and auditory sensory gating are not products of the same intra-cortical mechanism. Rather, the gating observed in each modality is a functionally distinct mechanism that is qualitatively analogous across modalities.
39

Taranu, Mihaela. "Commonalities and differences in visual and auditory multistability". Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11983.

Full text
Abstract (summary):
Perceptual bi/multi-stability, the phenomenon in which perceptual awareness switches between alternative interpretations of a stimulus, can be elicited by a large range of stimuli. The phenomenon has been explored in vision, audition, touch, and even olfaction. The degree to which perceptual switching across visual and auditory bi/multi-stable paradigms depends on common or separate mechanisms remains unanswered. This main question was addressed in the current work by using four ambiguous tasks that give rise to bi/multi-stability and that are thought to involve rivalry at different levels of cognitive processing: auditory streaming and ambiguous structure-from-motion (low-level tasks), and verbal transformations and ambiguous figures (high-level tasks). It was also investigated whether individual differences in executive function (inhibitory control and set-shifting), creativity and personality traits have common relationships with perceptual switching in adults and children. A series of five experiments (four studies) was conducted. In Study 1 (two experiments), the perceptual switching behaviour of adult participants was examined in the four perceptual tasks mentioned above. In Experiment 1, participants reported higher switching rates for the ambiguous figure and verbal transformations than for ambiguous motion and auditory streaming. However, in Experiment 2 participants had a higher switching rate in verbal transformations than in auditory streaming, while the switching rates in the two visual tasks did not differ significantly. The correlations between visual and auditory switching rates were similarly inconclusive: in Experiment 1, no cross-modal correlations emerged, while in Experiment 2 there were correlations between ambiguous figure and verbal transformations and between ambiguous motion and verbal transformations. Furthermore, inhibitory control, set-shifting, and creativity correlated with perceptual switching rates in some of the perceptual tasks, although not in a consistent manner. In Study 2, the development of perceptual switching was investigated in children in the same four tasks used in Study 1. Findings showed that the number of switches increased with age in all four perceptual tasks, indicating general maturational developments. Executive functions and creativity were not associated with ongoing perceptual switching, similar to what was found in adults. In Study 3, a neuroscientific perturbation approach was used to investigate whether the superior parietal cortex is causally involved in both visual and auditory multistability as a top-down mechanism. Transcranial magnetic stimulation of the anterior and posterior superior parietal cortex did not increase or decrease the median phase durations in response to ambiguous motion and auditory streaming; these regions were not causally involved in either visual or auditory multistability. Perceptual switching across modalities correlated nevertheless, indicating common perceptual mechanisms. In Study 4, the effects of attentional control and instructions were further investigated in ambiguous motion and auditory streaming. There were strong correlations between perceptual switching in the two tasks, confirming that there are common mechanisms. However, the effects of voluntary attention did not explain the commonalities found. Possibly the commonalities reflect similar functionalities at more low-level sensorial mechanisms. In conclusion, perceptual switching in vision and audition shares common mechanisms. These commonalities do not seem to be due to the same neural underpinning in parietal cortex. Moreover, attentional control does not explain the commonalities found, indicating a more low-level common mechanism or functionality. Perceptual switching across all ages is task-specific more than modality-specific. No influence of inhibitory control or creativity was consistently associated with perceptual switching regardless of task/modality, supporting the distributed-mechanisms hypothesis.
APA, Harvard, Vancouver, ISO and other styles
40

Salem, Tawfiq. "Learning to Map the Visual and Auditory World". UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/86.

Full text
Abstract (summary):
The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Billions of images that capture this complex relationship are uploaded to social-media websites every day and often are associated with precise time and location metadata. This rich source of data can be beneficial to improve our understanding of the globe. In this work, we propose a general framework that uses these publicly available images for constructing dense maps of different ground-level attributes from overhead imagery. In particular, we use well-defined probabilistic models and a weakly-supervised, multi-task training strategy to provide an estimate of the expected visual and auditory ground-level attributes consisting of the type of scenes, objects, and sounds a person can experience at a location. Through a large-scale evaluation on real data, we show that our learned models can be used for applications including mapping, image localization, image retrieval, and metadata verification.
APA, Harvard, Vancouver, ISO and other styles
41

Tuovinen, Antti-Pekka. "Object-oriented engineering of visual languages". Helsinki : University of Helsinki, 2002. http://ethesis.helsinki.fi/julkaisut/mat/tieto/vk/tuovinen/.

Full text
APA, Harvard, Vancouver, ISO and other styles
42

Chan, Jason Seeho. "'Change deafness' : an auditory analogue of visual 'change blindness'?" Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403972.

Full text
APA, Harvard, Vancouver, ISO and other styles
43

Laird, Esther. "Voice recognition and auditory-visual integration in person recognition". Thesis, University of Sussex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487906.

Full text
Abstract (summary):
The human ability to recognise a voice is important for social interaction and speech comprehension. In everyday recognitions, the voice can be encountered alone (e.g. over a telephone) or with a face, and the person being recognised can be familiar or unfamiliar (such as a witness choosing a perpetrator from a lineup). This thesis presents 7 studies covering each of these situations. The first paper presents 3 studies on recognition of unfamiliar voices when there is a change in emotional tone between learning and test phases. A tone change reduces recognition accuracy when there is no specific encoding strategy at the learning phase. Familiarisation at the learning phase reduces the tone change effect, but concentrating on word content at the learning phase does not. The second paper presents 3 studies investigating the limitations of the face overshadowing effect (voice recognition is worse when the voice is learned with a face than if it is learned alone). Blurring faces made face recognition more difficult but did not affect voice recognition. In experiment 2, participants learned a sentence repeated 3 times, either with the face changing on each repetition or staying the same. Face recognition accuracy was lower when there were 3 faces, but this did not affect voice recognition. In experiment 3, inverting faces made face recognition more difficult but did not affect voice recognition. The third paper reports that episodic memory for a celebrity is improved when a face and voice are given compared to just a face. A model of person recognition is presented that builds on existing models (e.g. Burton, Bruce & Johnston, 1990; Belin, 2004). It accounts for unfamiliar and familiar voice recognition and the benefits and costs of auditory-visual integration.
APA, Harvard, Vancouver, ISO and other styles
44

Lewis, Richard Kirk. "Functional imaging studies of visual-auditory integration in man". Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1444973/.

Full text
Abstract (summary):
This thesis investigates the central nervous system's ability to integrate visual and auditory information from the sensory environment into unified conscious perception. It develops the possibility that the principle of functional specialisation may be applicable in the multisensory domain. The first aim was to establish the neuroanatomical location at which visual and auditory stimuli are integrated in sensory perception. The second was to investigate the neural correlates of visual-auditory synchronicity, which would be expected to play a vital role in establishing which visual and auditory stimuli should be perceptually integrated. Four functional Magnetic Resonance Imaging studies identified brain areas specialised for: the integration of dynamic visual and auditory cues derived from the same everyday environmental events (Experiment 1), discriminating relative synchronicity between dynamic, cyclic, abstract visual and auditory stimuli (Experiments 2 & 3), and the aesthetic evaluation of visually and acoustically perceived art (Experiment 4). Experiment 1 provided evidence to suggest that the posterior temporo-parietal junction may be an important site of crossmodal integration. Experiment 2 revealed for the first time significant activation of the right anterior frontal operculum (aFO) when visual and auditory stimuli cycled asynchronously. Experiment 3 confirmed and developed this observation, as the right aFO was activated only during crossmodal (visual-auditory), but not intramodal (visual-visual, auditory-auditory), asynchrony. Experiment 3 also demonstrated activation of the amygdala bilaterally during crossmodal synchrony. Experiment 4 revealed the neural correlates of supramodal, contemplative, aesthetic evaluation within the medial fronto-polar cortex. Activity at this locus varied parametrically according to the degree of subjective aesthetic beauty, for both visual art and musical extracts. The most robust finding of this thesis is that activity in the right aFO increases when concurrently perceived visual and auditory sensory stimuli deviate from crossmodal synchrony, which may veto the crossmodal integration of unrelated stimuli into unified conscious perception.
APA, Harvard, Vancouver, ISO and other styles
45

Trounson, Ronald Harris. "Development of the UC Auditory-visual Matrix Sentence Test". Thesis, University of Canterbury. Communication Disorders, 2012. http://hdl.handle.net/10092/10348.

Full text
Abstract (summary):
Matrix Sentence Tests consist of syntactically fixed but semantically unpredictable sentences, each composed of 5 words (name, verb, quantity, adjective, object). Test sentences are generated by choosing 1 of 10 alternatives for each word to form sentences such as
APA, Harvard, Vancouver, ISO and other styles
46

Patching, Geoffrey R. "The role of attention in auditory and visual interaction". Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323686.

Full text
APA, Harvard, Vancouver, ISO and other styles
47

Braga, Rodrigo. "Evidence for separable networks for auditory and visual attention". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/25118.

Full text
Abstract (summary):
The research reported in this thesis investigated the neural systems involved in auditory attention, and in particular how these may differ from those recruited for visual attention. One current leading theory of attentional control postulates that a single frontoparietal network (the 'dorsal attention network' or DAN) subserves top-down attention to all sensory modalities. However, there is an abundance of published evidence which contradicts this claim (which is discussed herein). This thesis reports the results of three studies. In the first study, I investigated auditory attention whilst controlling for crossmodal and executive factors which may have confounded the interpretation of previous studies. In the second study, I investigated whether another crossmodal factor, the control of eye movements, may also have contributed to the controversy regarding auditory attention. Lastly, I investigated whether some regions of the brain contain multiple overlapping signals, a finding which could explain how a cortical region might display 'amodal' properties, and participate in multiple cognitive functions simultaneously. As a whole, this thesis provides evidence that the DAN is a predominantly visuospatial attention network whose recruitment during auditory attention reflects indirect crossmodal mechanisms rather than the direct modulation of auditory information. In addition, this thesis provides evidence that a candidate frontotemporal network, which links executive regions of the prefrontal cortex with temporal auditory association areas, subserves top-down attention to non-spatial auditory features.
APA, Harvard, Vancouver, ISO and other styles
48

Milne, Alice E. "Auditory and visual sequence learning in humans and macaques". Thesis, University of Newcastle upon Tyne, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.750390.

Full text
Abstract (summary):
Over the last 20 years there have been dramatic advancements in our understanding of language, particularly regarding the ontogeny, phylogeny and neurobiology of syntactic abilities. Yet how these language processes evolved remains unclear. The ability to extract information about the regularities present in sequentially presented stimuli has been related to both the acquisition and use of natural syntax. As a result, structured sequence learning is considered to be one of the general cognitive abilities from which language could have evolved. If sequence learning does represent an early precursor to human syntactic processing, there should be evidence of comparable sequence learning abilities in other primates with whom humans share a common ancestor. To this end, this thesis explores the sequence learning abilities of humans and their evolutionary relatives, macaques. Artificial grammars (AGs) were used to create sequences that emulate some of the order-based relationships found between words in a sentence. The first study used a nonadjacent AG in which the first syllable in a triplet predicted the third syllable. Electroencephalography (EEG) was used to show that the macaque brain potentials elicited by a violation of the nonadjacent relationships were more similar to those previously found in infants than in adults tested on the same paradigm. Together the results indicated that both infants and macaques extracted the sequencing relationships more automatically than adults who have already acquired language. The subsequent studies tested how humans and macaques respond to identically structured sequences of either auditory or visual stimuli. Behavioural results showed that sequence learning in both humans and macaques occurs in a very similar manner across the modalities. Subsequent imaging work also found correspondences across the two species, with similar frontal and parietal regions associated with both auditory and visual sequence learning, although results in the macaque are preliminary and presented as a case study. Together, the studies provide evidence that human sequencing abilities stem from an evolutionarily conserved capacity to extract regularities from sequentially presented stimuli and that this process is similarly represented in both humans and macaques.
APA, Harvard, Vancouver, ISO and other styles
49

Shepard, Kyle. "Visual and auditory characteristics of talkers in multimodal integration". Connect to resource, 2009. http://hdl.handle.net/1811/37229.

Full text
APA, Harvard, Vancouver, ISO and other styles
50

Alsalmi, Jehan. "Auditory-visual integration during the perception of spoken Arabic". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/13320/.

Full text
Abstract (summary):
This thesis aimed to investigate the effect of visual speech cues on auditory-visual integration during speech perception in Arabic. Four experiments were conducted, two of which were cross-linguistic studies using Arabic and English listeners. To compare the influence of visual speech in Arabic and English listeners, Chapter 3 investigated the use of visual components of auditory-visual stimuli in native versus non-native speech using the McGurk effect. The experiment suggested that Arabic listeners' speech perception was influenced by visual components of speech to a lesser degree than English listeners'. Furthermore, auditory and visual assimilation was observed for non-native speech cues. Additionally, when the visual cue was an emphatic phoneme, the Arabic listeners incorporated the emphatic visual cue in their McGurk response. Chapter 4 investigated whether the lower McGurk effect response in Arabic listeners found in Chapter 3 was due to a bottom-up mechanism of visual processing speed. Using auditory-visual temporally asynchronous conditions, Chapter 4 concluded that the difference in McGurk response percentage was not due to a bottom-up mechanism of visual processing speed. This led to the question of whether the difference in auditory-visual integration of speech could be due to more ambiguous visual cues in Arabic compared to English. To explore this question it was first necessary to identify visemes in Arabic. Chapter 5 identified 13 viseme categories in Arabic; some emphatic visemes were visually distinct from their non-emphatic counterparts, and a greater number of phonemes fell within the guttural viseme category compared to English. Chapter 6 evaluated the visual speech influence across the 13 viseme categories in Arabic, as measured by the McGurk effect. It was concluded that the predictive power of visual cues and the contrast between visual and auditory speech components lead to an increase in the McGurk response percentage in Arabic.
APA, Harvard, Vancouver, ISO and other styles
