Dissertations / Theses on the topic 'Visual and auditory languages'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Visual and auditory languages.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Spencer, Dawna. "Visual and auditory metalinguistic methods for Spanish second language acquisition." Connect online, 2008. http://library2.up.edu/theses/2008_spencerd.pdf.
Erdener, Vahit Dogu, University of Western Sydney, College of Arts Education and Social Sciences, and School of Psychology. "The effect of auditory, visual and orthographic information on second language acquisition." THESIS_CAESS_PSY_Erdener_V.xml, 2002. http://handle.uws.edu.au:8081/1959.7/685.
Master of Arts (Hons)
Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition /." View thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002." Bibliography: leaves 83-93.
Nácar García, Loreto, 1988. "Language acquisition in bilingual infants : Early language discrimination in the auditory and visual domains." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/511361.
Language acquisition is a fundamental piece of cognitive development during the first year of life. A key difference between infants growing up in monolingual and in bilingual environments is that the latter need to discriminate between two linguistic systems from very early in life. To learn two languages, bilingual infants have to perceive the regularities of each of their languages while keeping them separate. In this thesis we explore the differences between monolingual and bilingual infants both in their early discrimination capacities and in the strategies each group develops as a consequence of adapting to its linguistic environment. In the second chapter, we examine the capacity of bilingual and monolingual infants at 4 months of age to discriminate the native/dominant language from a foreign one in the auditory domain. Our results show that, in this context, monolingual and bilingual infants present different auditory signals when listening to their native language. The results indicate that discriminating the native language entails a greater cognitive cost for bilingual infants than for monolingual ones when only auditory information is available. In chapter 3, we explore the abilities of monolingual and bilingual infants at 8 months of age to discriminate languages in the visual domain. Here, we presented videos of two different sign languages to infants who had never been exposed to sign language and measured their discrimination abilities using a habituation paradigm. The results show that at this age only bilingual infants are able to make the distinction, and suggest that to do so they exploit the information coming from the signer's face.
Greenwood, Toni Elspeth. "Auditory language comprehension, and sequential interference in working memory following sustained visual attention /." Title page, contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsg8166.pdf.
Wroblewski, Marcin. "Developmental predictors of auditory-visual integration of speech in reverberation and noise." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6017.
Rybarczyk, Aubrey Rachel. "Weighting of Visual and Auditory Stimuli in Children with Autism Spectrum Disorders." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1459977848.
Bosworth, Rain G. "Psychophysical investigation of visual perception in deaf and hearing adults : effects of auditory deprivation and sign language experience /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 2001. http://wwwlib.umi.com/cr/ucsd/fullcit?p3015850.
Pénicaud, Sidonie. "Insights about age of language exposure and brain development : a voxel-based morphometry approach." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111591.
Lima, Fernanda Leitão de Castro Nunes de [UNESP]. "Julgamento perceptivo-auditivo e perceptivo-visual das produções gradientes de fricativas coronais surdas." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154302.
Full textApproved for entry into archive by Satie Tagara (satie@marilia.unesp.br) on 2018-06-19T14:10:24Z (GMT) No. of bitstreams: 1 lima_flcn_me_mar.pdf: 1310670 bytes, checksum: ab7f761d3d1be439f987de5d800203cd (MD5)
Made available in DSpace on 2018-06-19T14:10:24Z (GMT). No. of bitstreams: 1 lima_flcn_me_mar.pdf: 1310670 bytes, checksum: ab7f761d3d1be439f987de5d800203cd (MD5) Previous issue date: 2018-05-22
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Purpose: The purpose of this study was to analyze the percentage of judges' responses in the auditory-perceptual judgment of audio recordings and in the visual-perceptual judgment of ultrasound images for the detection of gradient productions of voiceless coronal fricatives, and to verify whether these forms of judgment differ and whether they correlate. Methods: Twenty judges with knowledge of the speech production process and of the phonetic classification and description of the Brazilian Portuguese (BP) phonemes were selected. The judged stimuli were collected from a database of audio and video files (ultrasound images) of the words "sapo" (frog) and "chave" (key) produced by 11 BP-speaking children aged 6 to 12 years (9 boys and 2 girls) with atypical speech production. The collected files were coded beforehand. After instruction, the judges had to choose, immediately upon the presentation of a stimulus, one of three options displayed on the computer screen. The experimental procedure, run in the PERCEVAL software, consisted of judging the audio files and judging the ultrasound images. For the audio files the options were: correct, incorrect, or gradient production; for the ultrasound images the options were: production of [s], production of [ʃ], or undifferentiated production. The presentation time, the randomized selection of stimuli, and the reaction time were controlled automatically by the PERCEVAL software. The data were submitted to statistical analysis. Results: The judgment of images provided greater identification of gradient stimuli (137 stimuli) and a shorter reaction time (mean = 1073.12 ms) than the auditory-perceptual judgment (80 stimuli, mean reaction time = 3126.26 ms), both differences statistically significant (p < 0.00).
Spearman's correlation test showed no statistical significance for the percentage of responses or for the reaction time. Conclusion: The judgment of ultrasound images is the more sensitive method for detecting gradient productions in speech and can be used as a complement to auditory-perceptual judgment in speech analysis.
Malapetsa, Christina. "Stroop tasks with visual and auditory stimuli : How different combinations of spoken words, written words, images and natural sounds affect reaction times." Thesis, Stockholms universitet, Institutionen för lingvistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-185057.
Tabanlioglu, Selime. "The Relationship Between Learning Styles and Language Learning Strategies of Pre-intermediate EAP Students." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1014034/index.pdf.
The study investigated students' learning style and strategy preferences. A total of 60 students were asked to complete two questionnaires. One was used to identify students' perceptual learning style preferences and the other was used to identify students' learning strategies. In addition, think-aloud protocols were held to determine the cognitive and metacognitive strategies students used while reading. The data analysis of the first questionnaire revealed that students' major learning style preferences were auditory learning and individual learning. Furthermore, a significant difference was found between males and females in the preference for tactile learning. The analysis of the second questionnaire revealed that cognitive strategies were favoured the most. No significant difference was found between males and females in the preferences for learning strategies. The analysis of the relationship between learning styles and strategies revealed that:
• visual styles had a significant relation with affective strategies;
• auditory styles had significant relationships with memory, cognitive, affective, and social strategies;
• there was a significant relationship between the individual learning style and compensation strategies;
• none of the learning styles had a significant relationship with metacognitive strategies.
The think-aloud protocols revealed that students used various cognitive and metacognitive strategies.
Schneiders, Julia A. [author], and Axel Mecklinger [academic supervisor]. "Visual and auditory vocabulary acquisition in learning Chinese as a second language : the impact of modality-specific working memory training / Julia A. Schneiders. Supervisor: Axel Mecklinger." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1052221815/34.
Martin, Maria da Graça Morais. "Ressonância magnética funcional em indivíduos normais: base de dados do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/5/5151/tde-25062009-103809/.
Introduction: Functional magnetic resonance imaging has had a great impact on neuroscience, but its clinical applicability is still limited. One of the main reasons is the lack of population databases to support clinical decisions. The aim of this work was to constitute a local normal database, representative of the patients of the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (HC-FMUSP). Methods: The sample included 64 normal subjects who, at some point, accompanied patients from the HC-FMUSP. They all performed motor, somatosensory, language, audiovisual and memory paradigms in a 1.5 T magnet. Demographic, neuropsychological and behavioral data were collected. Scanner quality control was also verified. Data were analyzed with the XBAM software on an individual and a group basis, and for behavioral correlation. Results: The sample had a variable demographic distribution. Group analysis showed results in agreement with the literature. The motor paradigm elicited a positive BOLD effect in the pre- and postcentral gyri, extending to premotor and parietal regions, the supplementary motor area, secondary somatosensory areas, basal ganglia and thalamus contralateral to the hand in question, and the ipsilateral cerebellum. Group analysis of the hand somatosensory paradigm showed the pre- and postcentral gyri, basal ganglia and thalamus contralateral to the stimulated hand, the ipsilateral cerebellum and bilateral secondary somatosensory areas. Group analysis of the somatosensory paradigm of the face showed the pre- and postcentral gyri, parietal cortex, premotor areas, inferior-posterior temporal cortex and secondary somatosensory areas bilaterally.
Language paradigms showed a positive BOLD effect in the inferior frontal gyrus and insula bilaterally, larger on the left, the left middle frontal gyrus, anterior cingulate, supplementary motor area, right cerebellum, cerebellar vermis, and left basal ganglia and thalamus; in particular, overt verbal fluency with presentation of different letters also engaged the left parietal lobe. Group analysis of the audiovisual paradigm showed a positive BOLD effect in the occipital and parietal cortex and cerebellum bilaterally during the visual condition, and bilateral temporal cortex with left frontal and parietal extension during the auditory condition. Finally, the working memory task showed activation in the occipital cortex, cerebellum, middle frontal gyri, parietal association cortex and mesial frontal region bilaterally, with right predominance. On an individual basis we detected a multitude of brain areas in each paradigm, with great variability; those with the highest frequency (≥ 85%) were: left precentral gyrus (95%) and superior right cerebellum (87%) during right hand movement; right precentral gyrus (88%) during left hand movement; left postcentral gyrus (88%) for the somatosensory stimulus of the right hand; right postcentral gyrus (89%) for the somatosensory stimulus of the left hand; right (90%) and left (88%) lingual gyri during the visual stimulus; and right (93%) and left (91%) middle temporal gyri for the auditory stimulus. Working memory and verbal fluency had no region with a frequency above 80%. Conclusions: The patterns of cerebral activation obtained in group analysis are in agreement with the literature. Individual analysis showed a higher frequency of positive BOLD effect in the primary and sensory cortices. The data collected during this work constitute a database that can be used to support clinical decisions.
Wilkie, Sonia. "Auditory manipulation of visual perception." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/39802.
Thesis accompanied by CD-ROM with demonstration of possible creative applications. A thesis presented to the University of Western Sydney, College of Arts, MARCS Auditory Laboratories, in fulfilment of the requirements for the degree of Master of Arts (Honours). Includes bibliographies. Thesis minus demonstration CD-ROM also available online at: http://handle.uws.edu.au:8081/1959.7/39849.
Zhao, Hang (Ph. D., Massachusetts Institute of Technology). "Visual and auditory scene parsing." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122101.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-132).
Scene parsing is a fundamental topic in computer vision and computational audition, where people develop computational approaches to achieve the human perceptual system's ability to understand scenes, e.g. grouping visual regions of an image into objects and segregating sound components in a noisy environment. This thesis investigates fully-supervised and self-supervised machine learning approaches to parse visual and auditory signals, including images, videos, and audio. Visual scene parsing refers to densely grouping and labeling image regions as object concepts. First I build the MIT scene parsing benchmark based on ADE20K, a large-scale, densely annotated dataset. This benchmark, together with the state-of-the-art models we open source, offers a powerful tool for the research community to solve semantic and instance segmentation tasks. Then I investigate the challenge of parsing a large number of object categories in the wild. An open-vocabulary scene parsing model that combines a convolutional neural network with a structured knowledge graph is proposed to address the challenge. Auditory scene parsing refers to recognizing and decomposing sound components in complex auditory environments. I propose a general audio-visual self-supervised learning framework that learns from a large amount of unlabeled internet videos. The learning process discovers the natural synchronization of vision and sound without human annotation. The learned model achieves the capability to localize sound sources in videos and separate them from the mixture. Furthermore, I demonstrate that motion cues in videos are tightly associated with sounds, which helps in solving the sound localization and separation problems.
by Hang Zhao.
Ph. D. in Mechanical Engineering and Computation
Massachusetts Institute of Technology, Department of Mechanical Engineering
Lee, Chung-sze Eunice. "Auditory, visual and auditory-visual contributions to the Cantonese-speaking hearing-impaired adolescents' recognition of consonants." Click to view the E-thesis via HKUTO, 1999. http://sunzi.lib.hku.hk/hkuto/record/B3621002X.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, May 14, 1999." Also available in print.
Storms, Russell L. "Auditory-visual cross-modal perception phenomena." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355474.
Dissertation supervisor(s): Michael J. Zyda. "September 1998." Includes bibliographical references (p. 207-222). Also available online.
Saliba, Anthony John. "Auditory-visual integration in sound localisation." Thesis, University of Essex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249979.
Hulusić, Vedad. "Auditory-visual interaction in computer graphics." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/47727/.
Martinez, Laura. "Auditory-visual intermodal discrimination in chimpanzees." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126577.
New degree system, course doctorate
Doctor of Science, degree no. 甲第14990号 (Sci. Doc. no. 理博第3469号)
Kyoto University, Graduate School of Science, Division of Biological Science
Examining committee: Professor Tetsuro Matsuzawa (chief examiner), Associate Professor Masaki Tomonaga, Professor Masanaru Takai
Qualified under Article 4, Paragraph 1 of the Degree Regulations
Columbus, Rebecca Foushee. "Auditory-Visual System Interactions: Perinatal Visual Experience Affects Auditory Learning and Memory in Bobwhite Quail Chicks (Colinus virginianus)." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/29226.
Ph. D.
Andrews, Brandie. "Auditory and visual information facilitating speech integration." Connect to resource, 2007. http://hdl.handle.net/1811/25202.
Title from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 27-28). Available online via Ohio State University's Knowledge Bank.
Tamosiunas, Matthew Joseph. "Auditory-visual integration of sine-wave speech." Connect to resource, 2007. http://hdl.handle.net/1811/25203.
Title from first page of PDF file. Document formatted into pages: contains 34 p.; also includes graphics. Includes bibliographical references (p. 26-27). Available online via Ohio State University's Knowledge Bank.
Klintfors, Eeva. "Emergence of words : Multisensory precursors of sound-meaning associations in infancy." Doctoral thesis, Stockholm : Department of Linguistics, Stockholm University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7371.
Kirchner, Holle. "Visual auditory interstimulus contingency effects in saccade programming." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=965164586.
Ver Hulst, Pamela. "Visual and auditory factors facilitating multimodal speech perception." Connect to resource, 2006. http://hdl.handle.net/1811/6629.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 24-26). Available online via Ohio State University's Knowledge Bank.
Heuermann, Heike. "Spatial and temporal factors in visual auditory interaction." [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967796601.
Hoffmann-Kuhnt, Matthias. "Visual and auditory vigilance in the bottlenosed dolphin." [S.l. : s.n.], 2003. http://www.diss.fu-berlin.de/2003/268/index.html.
Persson, Viktor. "Crossmodal correspondences between visual, olfactory and auditory information." Thesis, Stockholms universitet, Psykologiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-58837.
Lochner, Martin Jewell. "Auditory target identification in a visual search task." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/755.
Lee, Catherine. "Perception of synchrony between auditory and visual stimuli." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6375.
Jones, Laura. "Neurocognitive signatures of auditory and visual sensory gating." Thesis, Anglia Ruskin University, 2016. http://arro.anglia.ac.uk/700996/.
Mishra, Jyoti. "Neural processes underlying an auditory-induced visual illusion." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3296864.
Title from first page of PDF file (viewed Mar. 24, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
Harrison, Neil Richard. "Behavioural and electrophysiological correlates of auditory-visual integration." Thesis, University of Liverpool, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507171.
Waisman, Rogeria. "Paraphilias in males : visual and auditory CNV studies." Thesis, King's College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419803.
Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.
Jones, Laura. "Neurocognitive signatures of auditory and visual sensory gating." Thesis, Anglia Ruskin University, 2016. https://arro.anglia.ac.uk/id/eprint/700996/1/Jones_2016.pdf.
Taranu, Mihaela. "Commonalities and differences in visual and auditory multistability." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11983.
Salem, Tawfiq. "Learning to Map the Visual and Auditory World." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/86.
Tuovinen, Antti-Pekka. "Object-oriented engineering of visual languages." Helsinki : University of Helsinki, 2002. http://ethesis.helsinki.fi/julkaisut/mat/tieto/vk/tuovinen/.
Chan, Jason Seeho. "'Change deafness' : an auditory analogue of visual 'change blindness'?" Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403972.
Laird, Esther. "Voice recognition and auditory-visual integration in person recognition." Thesis, University of Sussex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487906.
Lewis, Richard Kirk. "Functional imaging studies of visual-auditory integration in man." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1444973/.
Trounson, Ronald Harris. "Development of the UC Auditory-visual Matrix Sentence Test." Thesis, University of Canterbury. Communication Disorders, 2012. http://hdl.handle.net/10092/10348.
Patching, Geoffrey R. "The role of attention in auditory and visual interaction." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323686.
Braga, Rodrigo. "Evidence for separable networks for auditory and visual attention." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/25118.
Milne, Alice E. "Auditory and visual sequence learning in humans and macaques." Thesis, University of Newcastle upon Tyne, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.750390.
Shepard, Kyle. "Visual and auditory characteristics of talkers in multimodal integration." Connect to resource, 2009. http://hdl.handle.net/1811/37229.
Alsalmi, Jehan. "Auditory-visual integration during the perception of spoken Arabic." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/13320/.