Doctoral dissertations on the topic "Speech perception"

Click this link to see other types of publications on the topic: Speech perception.

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles

Select the type of source:

Consult the top 50 doctoral dissertations on the topic "Speech perception".

An "Add to bibliography" button appears next to each work in the bibliography. Use it, and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, if the relevant details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.

1

Sohoglu, Ediz. "Perception of degraded speech". Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608225.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Barrett, S. "Prototypes in speech perception". Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596421.

Full text of the source
Abstract:
My Ph.D. examines the role of the prototype in speech perception. Although existing research claims that the speech prototype acts as a perceptual attractor for other stimuli in the same speech-sound category, I argue instead that the prototype actually has two roles in the perceptual system: as both a perceptual attractor (i.e. showing reduced discrimination in the surrounding perceptual space) and a perceptual repellor (i.e. showing enhanced discrimination), and that the choice of role depends entirely on the needs of the listener and the needs of the system that he/she is part of. This claim is substantiated through my work on non-speech prototypes, in which I found that although professional musicians treat music prototypes as perceptual repellors, non-musicians treat them as perceptual attractors. Because a professional musician 'needs' to be able to recognise how precisely 'in-tune' a musical sound is, enhanced discrimination is needed around their music prototypes. In contrast, a non-musician typically listens to music only to be entertained and can therefore show reduced discrimination around a music prototype. This is true also for the average individual listening to speech, who does not 'need' to register how qualitatively good an incoming speech-sound is as long as it is recognisable. Under normal circumstances, therefore, speech-sound prototypes function as perceptual attractors. I believe that the role of the prototype is determined by the amount of attention paid to it by the listener. This has formed the basis for my "A&R Theory", which states that prototypes (in both speech and non-speech) are organised along a perceptual function continuum bounded by regions of minimal and maximal attention, and that the position of the prototype along the continuum at a given point in time is directly dependent on the amount of attention paid to it. At the point of maximal attention (i.e. the professional musician) the prototype acts as a perceptual repellor.
At the point of minimal attention (i.e. the non-musician, average listener), the prototype acts as a perceptual attractor. Support for this theory comes from experiments showing that the position of the prototype along the perceptual function continuum can shift when the attention of the listener is deliberately manipulated. A&R Theory has formed the basis for my model of speech perception development, called the Prototype Conversion Model, which claims that infants' speech-sound prototypes are initially repellors, since infants are able to focus most of their attention onto the sounds of speech, but that these repellors gradually convert to attractors when the infants' attention is distracted by higher-order linguistic functions such as learning their first words. Other work on prototypes in this dissertation has argued that prototypes are highly context-sensitive and that they do not depend initially on exposure to a native language, as has often been claimed in the literature.
APA, Harvard, Vancouver, ISO, and other styles
3

Grancharov, Volodya. "Human perception in speech processing". Doctoral thesis, Stockholm : Sound and Image Processing Laboratory, School of Electrical Engineering, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4032.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Rombach, Frederik [Verfasser], Mariacristina [Akademischer Betreuer] Musso and Cornelius [Akademischer Betreuer] Weiller. "Gender differences in speech perception". Freiburg : Universität, 2018. http://d-nb.info/1171261721/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Peters, C. J. "Speech enhancement and vowel perception". Thesis, University of Reading, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379266.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Verwey, Johan. "Speech perception in virtual environments". Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/6371.

Full text of the source
Abstract:
Includes bibliographical references (p. 60-64).
Many virtual environments like interactive computer games, educational software or training simulations make use of speech to convey important information to the user. These applications typically present a combination of background music, sound effects, ambient sounds and dialog simultaneously to create a rich auditory environment. Since interactive virtual environments allow users to roam freely among different sound producing objects, sound designers do not always have exact control over what sounds the user will perceive at any given time. This dissertation investigates factors that influence the perception of speech in virtual environments under adverse listening conditions.
APA, Harvard, Vancouver, ISO, and other styles
7

Szycik, Gregor R. "Audiovisual integration during speech perception". Göttingen Sierke, 2008. http://d-nb.info/991223330/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Howard, John Graham. "Temporal aspects of auditory-visual speech and non-speech perception". Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553127.

Full text of the source
Abstract:
This thesis concentrates on the temporal aspects of the auditory-visual integratory perceptual experience described above. It is organized in two parts: a literature review followed by an experimentation section. After a brief introduction (Chapter One), Chapter Two begins by considering the evolution of the earliest biological structures to exploit information in the acoustic and optic environments. The second part of the chapter proposes that the auditory-visual integratory experience might be a by-product of the earliest emergence of spoken language. Chapter Three focuses on human auditory and visual neural structures. It traces the auditory and visual systems of the modern human brain through the complex neuroanatomical forms that construct their pathways, through to where they finally integrate into the high-level multi-sensory association areas. Chapter Four identifies two distinct investigative schools that have each reported on the auditory-visual integratory experience. We consider their different experimental methodologies and a number of architectural and information-processing models that have sought to emulate human sensory, cognitive and perceptual processing, and ask how far they can accommodate bi-sensory integratory processing. Chapter Five draws upon empirical data to support the importance of the temporal dimension of sensory forms in information processing, especially bimodal processing. It considers the implications of different modalities processing differently discontinuous afferent information within different time-frames. It concludes with a discussion of a number of models of biological clocks that have been proposed as essential temporal regulators of human sensory experience. In Part Two, the experiments are presented. Chapter Six provides the general methodology, and in the following chapters a series of four experiments is reported.
The experiments follow a logical sequence, each being built upon information either revealed or confirmed in results previously reported. Experiments One, Three, and Four required a radical reinterpretation of the 'fast-detection' paradigm developed for use in signal detection theory. This enables the work of two discrete investigative schools in auditory-visual processing to be brought together. The use of this modified paradigm within an appropriately designed methodology produces experimental results that speak directly to both the 'speech versus non-speech' debate and also to gender studies.
APA, Harvard, Vancouver, ISO, and other styles
9

Alghamdi, Najwa. "Visual speech enhancement and its application in speech perception training". Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19667/.

Full text of the source
Abstract:
This thesis investigates methods for visual speech enhancement to support auditory and audiovisual speech perception. Normal-hearing non-native listeners receiving cochlear implant (CI) simulated speech are used as ‘proxy’ listeners for CI users, a proposed user group who could benefit from such enhancement methods in speech perception training. Both CI users and non-native listeners share similarities with regard to audiovisual speech perception, including increased sensitivity to visual speech cues. Two enhancement methods are proposed: (i) an appearance-based method, which modifies the appearance of a talker’s lips using colour and luminance blending to apply a ‘lipstick effect’ that increases the saliency of mouth shapes; and (ii) a kinematics-based method, which amplifies the kinematics of the talker’s mouth to create the effect of more pronounced speech (an ‘exaggeration effect’). The application that is used to test the enhancements is speech perception training, or audiovisual training, which can be used to improve listening skills. An audiovisual training framework is presented which structures the evaluation of the effectiveness of these methods. It is used in two studies. The first study, which evaluates the effectiveness of the lipstick effect, found a significant improvement in audiovisual and auditory perception. The second study, which evaluates the effectiveness of the exaggeration effect, found improvement in the audiovisual perception of a number of phoneme classes; no evidence was found of improvements in subsequent auditory perception, as audiovisual recalibration to visually exaggerated speech may have impeded learning when used in the audiovisual training. The thesis also investigates an example of kinematics-based enhancement which is observed in Lombard speech, by studying the behaviour of visual Lombard phonemes in different contexts.
Due to the lack of suitable datasets for this analysis, the thesis presents a novel audiovisual Lombard speech dataset recorded under high SNR, which offers two, fixed head-pose, synchronised views of each talker in the dataset.
APA, Harvard, Vancouver, ISO, and other styles
10

Shuster, Linda Irene. "Speech perception and speech production : between and within modal adaptation /". The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu148726754698296.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Fallon, Marianne Catherine. "Children's perception of speech in noise". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ63773.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Hockley, Neil Spencer. "The development of audiovisual speech perception". Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22526.

Full text of the source
Abstract:
The developmental process of audiovisual speech perception was examined in this experiment using the McGurk paradigm (McGurk & MacDonald, 1976), in which a visual recording of a person saying a particular syllable is synchronized with the auditory presentation of another syllable. Previous studies have shown that audiovisual speech perception in adults and older children is strongly influenced by the visual speech information, but children under five are influenced by the auditory input almost exclusively (McGurk & MacDonald, 1976; Massaro, 1984; Massaro, Thompson, Barron, & Laren, 1986). In this investigation, 46 children aged between 4:7 and 12:4, and 15 adults, were presented with conflicting audiovisual syllables made according to the McGurk paradigm. The results indicated that the influence of auditory information decreased with age, while the influence of visual information increased with age. In addition, an adult-like response pattern was observed in only half of the children in the oldest child subject group (10-12 years old), suggesting that the integration of auditory and visual speech information continues to develop beyond the age of twelve.
APA, Harvard, Vancouver, ISO, and other styles
13

Ellis, Errol Mark. "Mechanisms in phonetic tactile speech perception". Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338072.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Poeppel, David. "The neural basis of speech perception". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11138.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Carbonell, Kathy M. "Individual Differences in Degraded Speech Perception". Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/604866.

Full text of the source
Abstract:
One of the lasting concerns in audiology is the unexplained individual differences in speech perception performance, even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due either to examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims: the first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions. The second aim is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics, both across tasks and across sessions; and finally, to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing impaired listeners.
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Guoping. "Speech perception in a sparse domain". Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/188321/.

Full text of the source
Abstract:
Environmental statistics are known to be important factors shaping our perceptual system. The visual and auditory systems have evolved to be efficient for processing natural images or speech. The common characteristic of natural images and speech is that both are highly structured and therefore contain much redundancy. Our perceptual system may use redundancy reduction and sparse coding strategies to deal with complex stimuli every day. Both redundancy reduction and sparse coding theory emphasise the importance of higher-order signal statistics. This thesis includes psycho-acoustical experiments designed to investigate how higher-order statistics affect our speech perception. Sparseness can be defined by a fourth-order statistic, kurtosis, and it is hypothesised that greater kurtosis should be reflected in better speech recognition performance in noise. Based on a corpus of speech material, kurtosis was found to be significantly correlated with the glimpsing area of noisy speech, an established measure that predicts speech recognition. Kurtosis was also found to be a good predictor of speech recognition, and an algorithm based on increasing kurtosis was found to improve speech recognition scores in noise. The listening experiment showed for the first time that higher-order statistics are important for speech perception in noise. It is known that hearing-impaired listeners have difficulty understanding speech in noise. Increasing the kurtosis of noisy speech may be particularly helpful for them to achieve better performance. Currently, neither hearing aids nor cochlear implants help hearing-impaired users greatly in adverse listening environments, partly due to a reduced dynamic range of hearing. Thus there is an information bottleneck, whereby these devices must transform acoustical sounds with a large dynamic range into the smaller range of hearing-impaired listeners.
The limited dynamic range problem can be thought of as a communication channel with limited capacity. Information could be more efficiently encoded for such a channel if redundant information were reduced. For cochlear implant users, unwanted channel interaction could also contribute to lower speech recognition scores in noisy conditions. This thesis proposes a solution to these problems for cochlear implant users by reducing signal redundancy and making signals more sparse. A novel speech processing algorithm, SPARSE, was developed and implemented. This algorithm aims to reduce redundant information and transform input signals into sparser stimulation sequences. It is hypothesised that sparse firing patterns of neurons will be achieved, which should be more biologically efficient based on sparse coding theory. Listening experiments were conducted with ten cochlear implant users who listened to speech signals in modulated and speech-babble noise, using either the conventional coding strategy or the new SPARSE algorithm. Results showed that the SPARSE algorithm can help them improve speech understanding in noise, particularly those with low baseline performance. It is concluded that signal processing algorithms for cochlear implants, and possibly also for hearing aids, that increase signal sparseness may deliver benefits for speech recognition in noise. A patent based on the algorithm has been applied for.
APA, Harvard, Vancouver, ISO, and other styles
17

Cox, Ethan Andrew. "Second language perception of accented speech". Diss., The University of Arizona, 2005. http://hdl.handle.net/10150/282887.

Full text of the source
Abstract:
The present study addresses a core issue in the study of speech perception: the question of how stable phonological representations are accessed from an inherently variable speech signal. In particular, the research investigates the perception of accented English speech by native and non-native listeners. It is known from previous research that foreign-accented speech is harder for native listeners to process than native-accented speech. The reason for this lies not only in qualities of the input (deviation from native production norms, for example) but also in qualities of the listener. Specifically, listeners' speech perception systems are tuned from an early age to pay attention to useful distinctions in the language environment but to attenuate differences which are not useful. This quality of the listeners' speech processing system suggests that in addition to being native speakers of a language or languages, we are also native listeners. However, what is a liability for native listeners (non-native input) may be a benefit for non-native listeners. When the foreign accent is derived from a single language shared between the speaker and the listener, application of native-language processing strategies to the accented input may result in more efficient processing of the input. The experiments in this dissertation address this possibility. In an experiment involving Dutch listeners processing Dutch-accented and American English-accented sentence materials, a reaction time advantage was observed for the mutually-accented materials. Experiments testing the main hypothesis with native Spanish-listening participants showed a different pattern of results. These participants, who had more experience with English overall than the Dutch participants, performed similarly to native-listening controls in displaying faster verification times for native-accented materials than mutually-accented materials.
These experiments lead to the conclusion that native-like listening, as assessed by the sentence verification paradigm employed in these experiments, can be achieved by non-native listeners. In particular, non-native listeners with little experience processing spoken English benefit from hearing input produced in a matching accent. Non-native listeners with sufficiently more experience processing spoken English, however, perform similarly to native listeners, displaying an advantage for native-accented input.
APA, Harvard, Vancouver, ISO, and other styles
18

Mak, Cheuk-yan Charin. "Effects of speech and noise on Cantonese speech intelligibility". Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37989790.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Mak, Cheuk-yan Charin, and 麥芍欣. "Effects of speech and noise on Cantonese speech intelligibility". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37989790.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Hayes, Rachel Anne. "Speech perception in infancy : infants' perception of rhyming and alliterative syllables". Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248118.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Ukrainetz, Teresa A. "The effect of coarticulation on the role of transitions in vowel perception". Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26655.

Full text of the source
Abstract:
The present study examines the effect of context on the use of transitions as cues to vowel perception. Thirty V₁CV₂CV₁ utterances were recorded, with V₁ being one of the three vowels /a, i, u/ and V₂ one of ten English vowels (/i, ɪ, eɪ, ɛ, æ, ɑ, ʌ, oʊ, ʊ, u/). After removal of the outer vowels (V₁), three sets of stimuli were created from the CV₂C parts: (1) unmodified controls (CO); (2) V₂ steady-state only (SS); and (3) transitions only (TR). Twenty subjects were asked to identify V₂. Subjects and speaker were matched for dialect and all subjects had some phonetics training. Results showed significant differences across conditions and contexts. Scores for SS stimuli, for all contexts, were as high as for CO stimuli. Performance on the TR stimuli was as good as on the other two conditions for two of the contexts. However, for the TR condition in the /a/ context, performance was considerably worse than for any other combination of conditions and contexts. Possible reasons for this are discussed, and the need for testing of other vowel contexts is emphasised. It is concluded that, in some V₁CV₂CV₁ contexts, transitions can provide information about vowel identity on a level equal to steady-state alone, or to the combined information provided by both transitions and steady-states. This effect, however, is not uniform across contexts. For at least one context, transitions alone are not sufficient to cue vowel identity at a level comparable to steady-state or combined information. This lack of uniformity suggests that the role of transitions varies with the type of vowel context present, and conclusions about general usefulness await systematic testing of a number of vowel contexts.
Faculty of Medicine
School of Audiology and Speech Sciences
Graduate
APA, Harvard, Vancouver, ISO, and other styles
22

Chua, W. W. "Speech recognition predictability of a Cantonese speech intelligibility index". Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B30509737.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Sánchez, García Carolina 1984. "Cross-modal predictive mechanisms during speech perception". Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/293266.

Full text of the source
Abstract:
The present dissertation addresses the predictive mechanisms operating online during audiovisual speech perception. The idea that prediction mechanisms operate during the perception of speech at several linguistic levels (i.e., syntactic, semantic, phonological, etc.) has received increasing support in recent literature. Yet most evidence concerns prediction phenomena within a single sensory modality, i.e., visual or auditory. In this thesis, I explore whether online prediction during speech perception can occur across sensory modalities. The results of this work provide evidence that visual articulatory information can be used to predict the subsequent auditory input during speech processing. In addition, evidence for cross-modal prediction was observed only in the observer’s native language but not in unfamiliar languages. This led to the conclusion that well-established phonological representations are paramount for online cross-modal prediction to take place. The last study of this thesis, using ERPs, revealed that visual articulatory information can have an influence beyond phonological stages. In particular, the visual saliency of word onsets has an influence at the stage of lexical selection, interacting with semantic processes during sentence comprehension. By demonstrating the existence of online cross-modal predictive mechanisms based on articulatory visual information, our results shed new light on how multisensory cues are used to speed up speech processing.
APA, Harvard, Vancouver, ISO, and other styles
24

Anderson, Elizabeth. "Audiovisual speech perception with degraded auditory cues". Connect to resource, 2006. http://hdl.handle.net/1811/6532.

Full text of the source
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 28-29). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
25

Greuel, Alison Jeanne. "Sensorimotor influences on speech perception in infancy". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50782.

Full text of the source
Abstract:
The multisensory nature of speech, and in particular, the modulatory influence of one’s own articulators during speech processing, is well established in adults. However, the origins of the sensorimotor influence on auditory speech perception are largely unknown, and require the examination of a population in which a link between speech perception and speech production is not well-defined; by studying preverbal infant speech perception, such early links can be characterized. Across three experimental chapters, I provide evidence that articulatory information selectively affects the perception of speech sounds in preverbal infants, using both neuroimaging and behavioral measures. In Chapter 2, I use a looking time procedure to show that in 6-month-old infants, articulatory information can impede the perception of a consonant contrast when the related articulator is selectively impaired. In Chapter 3, I use the high-amplitude suck (HAS) procedure to show that neonates are able to discriminate and exhibit memory for the vowels /u/ and /i/; however, the information from the infants’ articulators (a rounded lip shape) seems to only marginally affect behavior during the learning of these vowel sounds. In Chapter 4, I co-register HAS with a neuroimaging technique – Near Infrared Spectroscopy (NIRS) – and identify underlying neural networks in newborn infants that are sensitive to the sensorimotor-auditory match, in that the vowel which matches the lip shape (/u/) is processed differently than the vowel that is not related to the lip shape (/i/). Together, the experiments reported in this dissertation suggest that even before infants gain control over their articulators and speak their first words, their sensorimotor systems are interacting with their perceptual systems as they process auditory speech information.
Faculty of Arts
Department of Psychology
Graduate
APA, Harvard, Vancouver, ISO, and other styles
26

Danielson, Donald Kyle. "Visual influences on speech perception in infancy". Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58980.

Full text of the source
Abstract:
The perception of speech involves the integration of both heard and seen signals. Increasing evidence indicates that even young infants are sensitive to the correspondence between these sensory signals, and adding visual information to the auditory speech signal can change infants’ perception. Nonetheless, important questions remain regarding the nature of and limits to early audiovisual speech perception. In the first set of experiments in this thesis, I use a novel eyetracking method to investigate whether English-learning six-, nine-, and 11-month-olds detect content correspondence in auditory and visual information when perceiving non-native speech. Six- and nine-month-olds, prior to and in the midst of perceptual attunement, switch their face-scanning patterns in response to incongruent speech, evidence that infants at these ages detect audiovisual incongruence even in non-native speech. I then probe whether this familiarization, to congruent or incongruent speech, affects infants’ perception such that auditory-only phonetic discrimination of the non-native sounds is changed. I find that familiarization to incongruent speech changes—but does not entirely disrupt—six-month-olds’ auditory discrimination. Nine- and 11-month-olds, in the midst and at the end of perceptual attunement, do not discriminate the non-native sounds regardless of familiarization condition. In the second set of experiments, I test how temporal information and phonetic content information may both contribute to an infant’s use of auditory and visual information in the perception of speech. I familiarize six-month-olds to audiovisual Hindi speech sounds in which the auditory and visual signals of the speech are incongruent in content and, in two conditions, are also temporally asynchronous. 
I hypothesize that, when presented with temporally synchronous, incongruent stimuli, infants rely on either the auditory or the visual information in the signal and use that information to categorize the speech event. Further, I predict that the addition of a temporal offset to this incongruent speech changes infants’ use of the auditory and visual information. Although the main results of this latter study are inconclusive, post-hoc analyses suggest that when visual information is presented first or synchronously with auditory information, as is the case in the environment, infants exhibit a moderate matching preference for auditory information at test.
Faculty of Arts, Department of Psychology, Graduate
APA, Harvard, Vancouver, ISO, and other styles
27

Kendall, Melanie J. "Speech perception with multi-channel cochlear implants". Thesis, University of Nottingham, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267060.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Garrihy, G. "Neural network simulation of dynamic speech perception". Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317930.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Warren, P. "The temporal organisation and perception of speech". Thesis, University of Cambridge, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355053.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Barrett, Jenna. "Perception of Spectrally-Degraded, Foreign-Accented Speech". Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1619012518297988.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Fagelson, Marc A. "Gap Discrimination and Speech Perception in Noise". Digital Commons @ East Tennessee State University, 1999. https://dc.etsu.edu/etsu-works/1583.

Full text of the source
Abstract:
The relation between discrimination of silent gaps and speech‐in‐noise perception was measured in 20 normal‐hearing listeners using speech‐shaped noise as both the gap markers and the noise source for speech testing. In the gap discrimination experiment, subjects compared silent gaps marked by 60 dB SPL 250‐ms noise bursts to standards of either 5, 10, 20, 50, 100, or 200 ms. The gap results were most similar to those reported by Abel [S. M. Abel, J. Acoust. Soc. Am. 52, 519–524 (1972)] as ΔT/T decreased non‐monotonically with increased gap length. In a second experiment, the California Consonant Test (CCT) was administered at 50 dB HL via CD in three conditions: quiet, +10 S/N, and 0 S/N. Results from both experiments were correlated and the association between ΔT/T and CCT scores was generally negative. Listeners who discriminated the gaps with greater acuity typically had higher speech scores. The relation was strongest for the smaller gap standards at each S/N, or when performance for any gap duration was compared to the CCT results obtained in quiet.
APA, Harvard, Vancouver, ISO, and other styles
32

Kubitskey, Katherine M. "Experience and Perception: How Experience Affects Perception of Naturalness Change in Speakers with Dysarthria". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1438186598.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Makashay, Matthew Joel. "Individual Differences in Speech and Non-Speech Perception of Frequency and Duration". The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1047489733.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Ostroff, Wendy Louise. "The Perceptual Draw of Prosody: Infant-Directed Speech within the Context of Declining Nonnative Speech Perception". Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/37029.

Full text of the source
Abstract:
Infant speech perception develops within the context of specific language experience. While there is a corpus of empirical evidence concerning infants' perception of linguistic and prosodic information in speech, few studies have explored the interaction of the two. The present investigation was designed to combine what is known about infants' perception of nonnative phonemes (linguistic information) with what is known about infant preferences for ID speech (prosodic information). In particular, the purpose of this series of studies was to examine infant preferences for ID speech within the timeline of the phonemic perceptual reorganization that occurs at the end of the first postnatal year. In Experiment 1, 20 native-English 10- to 11-month-old infants were tested in an infant-controlled preference procedure for attention to ID speech in their native language versus ID speech in a foreign language. The results showed that infants significantly preferred the ID-native speech. In Experiment 2, the preferred prosodic information (ID speech) was separated from the preferred linguistic information (native speech), as a means of discerning the relative perceptual draw of these types of speech characteristics. Specifically, a second group of 20 10- to 11-month-old infants was tested for a preference between ID speech in a foreign language and AD speech in their native language. In this case the infants exhibited a significant preference for ID-foreign speech, suggesting that prosodic information in speech has more perceptual weight than linguistic information. This pattern of results suggests that infants attend to linguistic-level information by 10 to 11 months of age, and that ID speech may play a role in the native-language tuning process by directing infants' attention to linguistic specifics in speech.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
35

Chua, W. W., and 蔡蕙慧. "Speech recognition predictability of a Cantonese speech intelligibility index". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30509737.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Keebbum. "Korean-English bilinguals’ perception of noise-vocoded speech". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1562004544370682.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Kopyar, Beth Ann. "Intensity discrimination abilities of infants and adults: implications for underlying processes". Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8263.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Schmitz, Judith 1984. "On the relationship between native and non-native speech perception and speech production". Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/456304.

Full text of the source
Abstract:
Models of speech perception differ in the nature of the relationship between speech perception and production. Whether speech perception and production processes are based on a common representation (the articulatory gesture) or speech perception fundamentally operates on the acoustic code is highly debated. In three experimental studies, we investigated the nature of the relationship between speech perception and production. In the first study we found an active role of the speech production system in speech perception, even when listening to unfamiliar phonemes. In the second study we found no influence of a somatosensory manipulation applied to an articulator on passive speech perception. In the third study we showed that speech perception and production abilities are tightly related across phonological processes (sub-lexical and lexical) and participants’ languages (native, L1, and second language, L2). The results suggest that speech perception and production are intimately linked.
APA, Harvard, Vancouver, ISO, and other styles
39

Schaefer, Martina Christina Marion. "The interaction between speech perception and speech production: implications for speakers with dysarthria". Thesis, University of Canterbury. Communication Disorders, 2013. http://hdl.handle.net/10092/8610.

Full text of the source
Abstract:
The purpose of the research presented here was to systematically investigate the role of speech perception in speech production in speakers of different ages and those with Parkinson's disease (PD) and hypokinetic dysarthria. For this, the experimental designs of auditory perturbation and mimicry were chosen. The initial research phase established that the magnitude of compensation to auditory vowel perturbation was reduced in 54 speakers of New Zealand English (NZE) when compared to previous studies conducted with speakers of American (AE) and Canadian English (CE). A number of factors were studied to determine possible predictors of compensation and distinguish between potential changes associated with ageing. However, no predictors of compensation were found for the overall group. Post-hoc analyses established an increased variability in response patterns in NZE when compared to previous studies of AE and CE. Subsequent follow-up analyses focused on the response-dependent categories of (1) big compensators, (2) compensators, (3) big followers, and (4) followers. Linear mixed-effect modelling revealed that in big compensators, the magnitude of compensation was greater in speakers who exhibited larger F1 baseline standard deviation and greater F1 vowel distances of HEAD relative to HEED and HAD. F1 baseline standard deviation was found to have a similar predictive value for the group of compensators. No predictors of compensation were found for the other two subgroups. Phase two was set up as a continuation of phase one and examined whether a subset of 16 speakers classified as big compensators adapted to auditory vowel perturbation. Linear mixed-effect modelling revealed that in the absence of auditory feedback alterations, big compensators maintained their revised speech motor commands for a short period of time until a process of de-adaptation was initiated. No predictors of adaptation were found for the group.
Due to the unexpected results from the first two research phases indicating a dominant weighting of somatosensory feedback in NZE compared to auditory-perceptual influences, a different experimental paradigm was selected for phase three - mimicry. The purpose of this study was to determine whether eight speakers with PD and dysarthria and eight age-matched healthy controls (HC) are able to effectively integrate speech perception and speech production when attempting to match an acoustic target. Results revealed that all speakers were able to modify their speech production to approximate the model speaker but the acoustic dimensions of their speech did not move significantly closer to the target over the three mimicry attempts. Although speakers with moderate levels of dysarthria exhibited greater acoustic distances (except for the dimension of pitch variation), neither the perceptual nor the acoustic analyses found significant differences in mimicry behaviour across the two groups. Overall, these findings were considered preliminary evidence that speech perception and speech production can at least to some extent be effectively integrated to induce error-correction mechanisms and subsequent speech motor learning in these speakers with PD and dysarthria.
APA, Harvard, Vancouver, ISO, and other styles
40

Barker, Jon. "The relationship between speech perception and auditory organisation : studies with spectrally reduced speech". Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286581.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
41

Uebler, Ulla. "Multilingual speech recognition". Berlin: Logos Verlag, 2000. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=009117880&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Jett, Brandi. "The role of coarticulation in speech-on-speech recognition". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1554498179209764.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Ver, Hulst Pamela. "Visual and auditory factors facilitating multimodal speech perception". Connect to resource, 2006. http://hdl.handle.net/1811/6629.

Full text of the source
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 24-26). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
44

Blomberg, Rina. "Cortical phase synchronisation mediates natural face-speech perception". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122825.

Full text of the source
Abstract:
It is a challenging task for researchers to determine how the brain solves multisensory perception, and the neural mechanisms involved remain subject to theoretical conjecture. According to a hypothesised cortical model for natural audiovisual stimulation, phase synchronised communications between participating brain regions play a mechanistic role in natural audiovisual perception. The purpose of this study was to test the hypothesis by investigating oscillatory dynamics from ongoing EEG recordings whilst participants passively viewed ecologically realistic face-speech interactions in film. Lagged-phase synchronisation measures were computed for conditions of eyes-closed rest (REST), speech-only (auditory-only, A), face-only (visual-only, V) and face-speech (audio-visual, AV) stimulation. Statistical contrasts examined AV > REST, AV > A, AV > V and AV-REST > sum(A,V)-REST effects. Results indicated that cross-communications between the frontal lobes, intraparietal associative areas and primary auditory and occipital cortices are specifically enhanced during natural face-speech perception and that phase synchronisation mediates the functional exchange of information associated with face-speech processing between both sensory and associative regions in both hemispheres. Furthermore, phase synchronisation between cortical regions was modulated in parallel within multiple frequency bands.
APA, Harvard, Vancouver, ISO, and other styles
45

Desjardins, Renée Nicole. "Audiovisual speech perception in 4-month-old infants". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25041.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
46

Gruber, Michael. "Dyslexics' phonological processing in relation to speech perception". Doctoral thesis, Umeå: Univ., 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-113.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

Pinard, Minola. "Non-linguistic versus linguistic processes in speech perception". Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72057.

Full text of the source
Abstract:
Four studies were conducted in which three sets of tasks, presented in a standard format that was progressively refined, tapped non-linguistic versus linguistic processes in speech processing. The third set of tasks gave the clearest results. In it, male and female francophone subjects of different ages and of varying degrees of knowledge of English were tested. Three sets of consonant contrasts were used. A dichotomization into two separate processes was possible by finding expected differential patterns of development for the two tasks; we were able to postulate that the two processes were non-linguistic versus linguistic by finding expected specific patterns of development, specific patterns of sex-by-age similarities and differences, differential patterns of correlations between degree of bilingualism and consonant contrasts, and, unexpectedly, a different pattern of performance on one contrast, all according to task. The results are discussed mainly in relation to other experiments on "the phonetic mode".
APA, Harvard, Vancouver, ISO, and other styles
48

Brajot, François-Xavier. "The perception of speech intensity in Parkinson's disease". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123154.

Full text of the source
Abstract:
Advances in Parkinson's disease research are uncovering a complex pathology that extends well beyond basal ganglia and dopamine-related structures, one that impacts sensory processing and sensorimotor integration as much as it does motor planning and execution, with implications for the functional consequences of the disorder. The current research project is motivated by evidence that perceptual, alongside classical motor deficits, may be ascribed to the clinical presentation of the hypokinetic dysarthria of Parkinson's disease. Three studies were conducted to assess the roles of auditory, somatosensory and sensorimotor integration processes involved in speakers' perception of the volume of their own speech. The combination of loudness magnitude estimation and masking of sensory feedback in the first two studies reveals differences in psychophysical loudness functions that suggest that speech loudness perception deficits in Parkinson's disease are the result of problems with the organization and integration of multi-sensory feedback due to inadequate motor planning. A third, electroencephalographic study supports this conclusion with evidence of atypical cortical event-related potentials among parkinsonian participants, indicating defective preparatory and corrective neural processes otherwise undetectable in the psychophysical experiments. Based on the findings from this series of experiments, the self-perception of speech intensity is attributed to motorically specified parameters of vocal effort. The interpretation of associated sensory feedback is determined by those parameters. The perceptual deficit associated with hypokinetic dysarthria is thus proposed to result directly from deficits in generating speech movements, with concomitant effects on the subsequent identification, organization and interpretation of reafferent information.
APA, Harvard, Vancouver, ISO, and other styles
49

Roberts, M. "Analyses of adaptation and contrast in speech perception". Thesis, University of Nottingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356039.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
50

Whybrow, Jonathan James. "Experiments relating to the tactile perception of speech". Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269739.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
