Doctoral dissertations on the topic "Non-speech"



Consult the top 50 doctoral dissertations on the topic "Non-speech".

An "Add to bibliography" button is available next to each work listed here. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, whenever such details are provided in the record's metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile the corresponding bibliographies.

1

Howard, John Graham. "Temporal aspects of auditory-visual speech and non-speech perception". Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553127.

Abstract:
This thesis concentrates on the temporal aspects of the auditory-visual integratory perceptual experience described above. It is organized in two parts: a literature review, followed by an experimentation section. After a brief introduction (Chapter One), Chapter Two begins by considering the evolution of the earliest biological structures to exploit information in the acoustic and optic environments. The second part of the chapter proposes that the auditory-visual integratory experience might be a by-product of the earliest emergence of spoken language. Chapter Three focuses on human auditory and visual neural structures. It traces the auditory and visual systems of the modern human brain through the complex neuroanatomical forms that construct their pathways, through to where they finally integrate into the high-level multi-sensory association areas. Chapter Four identifies two distinct investigative schools that have each reported on the auditory-visual integratory experience. We consider their different experimental methodologies and a number of architectural and information processing models that have sought to emulate human sensory, cognitive and perceptual processing, and ask how far they can accommodate bi-sensory integratory processing. Chapter Five draws upon empirical data to support the importance of the temporal dimension of sensory forms in information processing, especially bimodal processing. It considers the implications of different modalities processing differently discontinuous afferent information within different time-frames. It concludes with a discussion of a number of models of biological clocks that have been proposed as essential temporal regulators of human sensory experience. In Part Two, the experiments are presented. Chapter Six provides the general methodology, and in the following chapters a series of four experiments is reported. The experiments follow a logical sequence, each being built upon information either revealed or confirmed in results previously reported. Experiments One, Three, and Four required a radical reinterpretation of the 'fast-detection' paradigm developed for use in signal detection theory. This enables the work of two discrete investigative schools in auditory-visual processing to be brought together. The use of this modified paradigm within an appropriately designed methodology produces experimental results that speak directly both to the 'speech versus non-speech' debate and to gender studies.
2

Stevens, D. A. "Non-linear prediction for speech processing". Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639110.

Abstract:
For over 20 years linear prediction has been one of the most widely used methods for analysing speech signals. Linear predictors have been used to model the vocal tract in all areas of speech processing from speech recognition to speech synthesis. However, Teager showed as early as 1980 by measuring the flow within the vocal tract during the pronunciation of a vowel sound, that the vocal tract is a non-linear system. As such the standard linear predictors are unable to model all the vocal tract information available in the speech signal. This work looks at replacing or complementing the standard linear models with non-linear ones in order to improve the modelling of the vocal tract. Several different methods of both generating and implementing non-linear models of the vocal tract are assessed to see how much improvement in prediction can be achieved by using non-linear models, either in place of, or complementing, the standard linear models. Two basic approaches to non-linear prediction have been used. The first of these is to configure a multi-layered perceptron (MLP) as a non-linear predictor and then to train the MLP to predict the speech signal. The second method is known as a split function approach as it effectively splits the overall predictor function into smaller sub-functions each of which requires a less complex predictor function than the whole. This second method uses a classification stage to determine what type of speech is present and then uses a separate predictor for each of the classifications. Initial results using a single MLP predictor proved ineffective, returning gains of 0.1 to 0.3 dB in excess of the standard LPC. This is thought to be due to an inability of the networks used to model the full dynamic complexity of the speech signal. However with the split function predictors it is shown that relatively high prediction gains can be achieved using a few simple sub-functions. 
With four linear sub-functions gains of 2.1 dB have been achieved over the standard LPC.
3

Livescu, Karen 1975. "Analysis and modeling of non-native speech for automatic speech recognition". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80204.

4

Payne, Nicole, and Saravanan Elangovan. "Musical Training Influences Temporal Processing of Speech and Non-Speech Contrasts". Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/1565.

5

Makashay, Matthew Joel. "Individual Differences in Speech and Non-Speech Perception of Frequency and Duration". The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1047489733.

6

Mesgarani, Nima. "Discrimination of speech from non-speech based on multiscale spectro-temporal modulations". College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/3044.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
7

Payne, N., Saravanan Elangovan, and Jacek Smurzynski. "Auditory Temporal Processing of Speech and Non-speech Contrasts in Specialized Listeners". Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/2216.

8

Schmitz, Judith 1984. "On the relationship between native and non-native speech perception and speech production". Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/456304.

Abstract:
Models of speech perception differ in the nature of the relationship between speech perception and production. Whether speech perception and production are based on a common representation (the articulatory gesture) or speech perception fundamentally operates on the acoustic code is highly debated. In three experimental studies, we investigated the nature of the relationship between speech perception and production. In the first study we found an active role of the speech production system in speech perception, even when listening to unfamiliar phonemes. In the second study we found no influence of a somatosensory manipulation applied to an articulator in passive speech perception. In the third study we showed that speech perception and production abilities are tightly related across phonological processes (sub-lexical and lexical) and participants' languages (native, L1, and second language, L2). The results suggest that speech perception and production are intimately linked.
9

Cahillane, Marie Ann. "Contrasting effects of irrelevant speech and non-speech sounds on short-term memory". Thesis, Bath Spa University, 2008. http://researchspace.bathspa.ac.uk/1473/.

Abstract:
The characteristics of speech that determine its greater disruption of serial recall relative to non-speech (the irrelevant sound effect) are investigated (cf. Tremblay et al., 2000). Degraded non-words disrupted serial recall less than clear non-words. Tasks show that both vowels and consonants of degraded non-words were misperceived, with initial consonants misperceived to a greater degree. Measures that followed showed that clear sequences of non-words with changing vowels were more disruptive than sequences with changing consonants. Degrading vowel-only changing sequences reduced disruption of serial recall to a level observed with clear consonant-only changing sequences, whereas degradation had no effect on disruption by consonant-only changing sequences. In further experiments the acoustic complexity of speech was reduced while maintaining its intelligibility by removing fundamental frequency information. Whispered speech disrupted serial recall to the same degree as voiced speech. Alternating voiced and whispered speech sounds within a sequence did not reduce serial recall performance relative to a sequence of voiced-only speech sounds. Results indicate that the formant structure of speech sounds, and not fundamental frequency information, is the important carrier of acoustic change. Reversing the fine structure of whispered speech damaged its intelligibility whilst preserving acoustic complexity, and these sounds were as disruptive of serial recall as normal whispered speech. This indicates that the vocal tract resonances (formants) of speech, and not its intelligibility, determine its disruptive power. The relative disruptiveness of speech and non-speech sounds was then examined. Sounds were matched for acoustic complexity, but their 'speech-likeness' was destroyed. Speech disrupted serial recall more than did non-speech. Results indicate that the biological nature of speech renders it more disruptive than non-speech. The findings refute the 'changing-state hypothesis', which is derived from the object-oriented episodic record model. This hypothesis argues that it is the degree of acoustic variation within an irrelevant stream, and not the nature of its component sounds, which determines its disruption of serial memory. Biological sounds may disrupt serial memory to a greater degree since they are of behavioural relevance and provide information about the environment that may need to be attended to. The addition to the object-oriented episodic record model of an attentional mechanism that regulates the reallocation of cognitive processing resources is proposed.
10

Maiste, Anita. "Human auditory event-related potentials to frequency changes in speech and non-speech sounds". Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5899.

Abstract:
This thesis presents two approaches investigating how the human auditory system processes the brief frequency changes that occur in speech sounds. Section 1 of the thesis consists of a critical review of the literature on human auditory event-related potentials (ERPs) and speech perception. Section 2 consists of three experiments evaluating ERPs to non-speech frequency changes. The experiments evaluated steady state and transient auditory evoked potentials (EPs) to tones that were sinusoidally modulated in frequency and to tones that alternated between two frequencies with a linear ramp. The tones were presented at modulation rates typical of average syllable production. The steady state responses to sinusoidal FM were small and difficult to record at both the first and second harmonics. Ramp FM evoked larger and more consistent second harmonic steady state responses than the sinusoidal FM. Only the ramp FM stimuli elicited transient EPs and these only at low modulation rates. These responses were larger to upward ramps than to downward ramps. The response to two simultaneously presented ramp FM tones differed from the sum of responses to the individual tones indicating some interaction in the processing of the two stimuli. Since the first study found that steady state responses were not as reliable as ERPs to discrete frequency changes, the second study used speech sounds containing discrete frequency changes. In Section 3 of the thesis computer-modified speech sounds from the /ba/ to /da/ continuum were presented to reading subjects as a train of standard speech sounds interspersed with two types of infrequent deviant speech sounds. One deviant stimulus lay within the same category as the standard and the other lay across the categorical boundary from the standard, but both were acoustically equidistant from the standard in terms of the second formant transition. 
When the standard stimulus was drawn from the /ba/ end of the continuum, the across-category deviants elicited a clear mismatch negativity (MMN) in the auditory event-related potential whereas the within-category deviants did not. This MMN began at about 60-120 ms after stimulus onset and was present for several hundred ms. These results suggest that categorical processing of speech sounds occurs independently of attention at an early echoic memory stage. When the standard stimulus was drawn from the /da/ end of the continuum, the MMN to across-category deviants was not larger than the MMN to within-category deviants. Grand mean waveforms suggested that both deviant stimuli elicited small MMNs. Although this may indicate processing along an acoustic continuum, the results of the psychophysical tests suggest that the standard stimulus in this condition was too close to the category boundary for the deviants to evoke a consistent categorical mismatch.
11

Kaipa, Ramesh. "Evaluation of principles of motor learning in speech and non-speech-motor learning tasks". Thesis, University of Canterbury. Communication Disorders, 2013. http://hdl.handle.net/10092/10349.

Abstract:
Principles of motor learning (PMLs) refer to a set of concepts which are considered to facilitate the process of motor learning. PMLs can be broadly grouped into principles based on (1) the structure of practice/treatment, and (2) the nature of feedback provided during practice/treatment. Application of PMLs is most evident in studies involving non-speech- motor tasks (e.g., limb movement). However, only a few studies have investigated the application of PMLs in speech-motor tasks. Previous studies relating to speech-motor function have highlighted two primary limitations: (1) Failure to consider whether various PMLs contribute equally to learning in both non-speech and speech-motor tasks, (2) Failure to consider whether PMLs can be effective in a clinical cohort in comparison to a healthy group. The present research was designed to shed light on whether selected PMLs can indeed facilitate learning in both non-speech and speech-motor tasks and also to examine their efficacy in a clinical group with Parkinson’s disease (PD) in comparison to a healthy group. Eighty healthy subjects with no history of sensory, cognitive, or neurological abnormalities, ranging 40-80 years of age, and 16 patients with PD, ranging 58-78 years of age, were recruited as participants for the current study. Four practice conditions and one feedback condition were considered in the training of a speech-motor task and a non-speech- motor task. The four practice conditions were (1) constant practice, (2) variable practice, (3) blocked practice, and (4) random practice. The feedback was a combination of low-frequency, knowledge of results, knowledge of performance, and delayed feedback conditions, and was paired with each of the four practice conditions. The participants in the clinical and non-clinical groups were required to practise a speech and a non-speech-motor learning task. Each participant was randomly and equally assigned to one of the four practice groups. 
The speech-motor task involved production of a meaningless and temporally modified phrase, and the non-speech-motor task involved practising a 12-note musical sequence using a portable piano keyboard. Each participant was seen on three consecutive days: the first two days served as the acquisition phase and the third day was the retention phase. During the acquisition phase, the participants practised 50 trials of the speech phrase and another 50 trials of the musical tune each day, and each session lasted for 60-90 min. Performance on the speech and non-speech tasks was preceded by an orthographic model of the target phrase/musical sequence displayed on a computer monitor along with an auditory model. The participants were instructed to match their performance to the target phrase/musical sequence exactly. Feedback on performance was provided after every 10th trial. The nature of practice differed among the four practice groups. The participants returned on the third day for the retention phase and produced 10 trials of the target phrase and another 10 trials of the musical sequence. Feedback was not provided during or after the retention trials. These final trials were recorded for later acoustic analyses. The analyses focused on spatial and temporal parameters of the speech and non-speech tasks. Spatial analysis involved evaluating the production accuracy of target phrase/tune by calculating the percentage of phonemes/keystrokes correct (PPC/PKC). The temporal analysis involved calculating the temporal synchrony of the participant productions (speech phrase & tune) during the retention trials with the target phrase and tune, respectively, through the phi correlation. The PPC/PKC and phi correlation values were subjected to a series of mixed model ANOVAs. In the healthy subjects, the results of the spatial learning revealed that the participants learned the speech task better than the non-speech (keyboard) task. 
In terms of temporal learning, there was no difference in learning between the speech and non-speech tasks. Overall, the participants performed better on the spatial domain than on the temporal domain, indicating a spatial-temporal trade-off. Across spatial as well as temporal learning, participants in the constant practice condition learned the speech and non-speech tasks better than participants in the other practice conditions. Another interesting finding was that there was an age effect, with the younger participants demonstrating superior spatial and temporal learning to that of the older participants, except for temporal learning on the keyboard task, for which there was no difference. In contrast, the PD group showed no significant differences on spatial or temporal learning between any of the four practice conditions. Furthermore, although the PD patients had poorer performances than the healthy subjects on both the speech and keyboard tasks, they showed a very similar pattern of learning across all four practice conditions to that of the healthy subjects. The findings of the current study have potential applications in speech-language therapy, as follows: (1) a constant practice regime could be beneficial in developing speech therapy protocols to treat motor-based communication disorders (e.g., dysarthria); (2) speech therapists need to exercise caution in designing speech therapy goals incorporating similar PMLs for younger and older adults, as the application of similar PMLs in younger and older adults may bring about different learning outcomes; and (3) finally, it could be beneficial for patients to practise speech tasks which require them to focus on either the spatial or the temporal aspect, rather than on both aspects simultaneously.
12

Pinard, Minola. "Non-linguistic versus linguistic processes in speech perception". Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72057.

Abstract:
Four studies were conducted in which three sets of tasks were devised that tapped, in a standard and progressively refined format, non-linguistic versus linguistic processes in speech processing. The third set of tasks gave the clearest results. In it, male and female francophone subjects of different ages and of varying degrees of knowledge of English were tested. Three sets of consonant contrasts were used. A dichotomization into two separate processes was possible by finding expected differential patterns of development for the two tasks; we were able to postulate that the two processes were non-linguistic versus linguistic by finding expected specific patterns of development, specific patterns of sex-by-age similarities and differences, differential patterns of correlations between degree of bilingualism and consonant contrasts, and, unexpectedly, a different pattern of performance on one contrast, all according to task. The results are discussed mainly in relation to other experiments on "the phonetic mode".
13

Hankinson, John C. K. "A grammatical approach to non-speech audio communication". Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341114.

14

Wu, Qiong. "Do Infants Discriminate Hyper- from Non-Hyperarticulated Speech?" Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/41287.

Abstract:
Several studies have found that adult caretakers usually hyperarticulate to infants by modifying their voice in ways that promote and sustain infants' attention. This articulation when engaging in infant-directed speech (IDS) can result in "clear speech" through the expansion of the vowel space area. The degree of speech clarity produced by caregivers appears to provide advantages for young language learners by promoting lexical perception and learning. However, few studies have examined whether infants are able to perceive the difference between hyperarticulation and normal speech. In this study, 7- to 12-month-olds' (n=17) speech discrimination when hearing hyperarticulated and non-hyperarticulated words in mothers' natural speech production was examined. The degree of speech clarity was determined by the relations of the first (F1) and second (F2) formant frequencies of the vowel. The result showed that there was no discrimination between listening to hyperarticulated and non-hyperarticulated words, indicating that the benefit accrued by exposure to clear speech may require no selective attention on the part of the infant. Thus the advantages of hyperarticulation might be related to other characteristics.
Master of Science
15

Pauletto, Sandra. "Interactive non-speech auditory display of multivariate data". Thesis, University of York, 2007. http://etheses.whiterose.ac.uk/14192/.

16

Steele, Ariana J. "Non-binary speech, race, and non-normative gender: Sociolinguistic style beyond the binary". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157419067968368.

17

Cowling, Michael. "Non-Speech Environmental Sound Classification System for Autonomous Surveillance". Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20040428.152425.

Abstract:
Sound is one of a human being's most important senses. After vision, it is the sense most used to gather information about the environment. Despite this, comparatively little research has been done into the field of sound recognition. The research that has been done mainly centres around the recognition of speech and music. Our auditory environment is made up of many sounds other than speech and music. This sound information can be tapped into for the benefit of specific applications such as security systems. Currently, most researchers are ignoring this sound information. This thesis investigates techniques to recognise environmental non-speech sounds and their direction, with the purpose of using these techniques in an autonomous mobile surveillance robot. It also presents advanced methods to improve the accuracy and efficiency of these techniques. Initially, this report presents an extensive literature survey, looking at the few existing techniques for non-speech environmental sound recognition. This survey also, by necessity, investigates existing techniques used for sound recognition in speech and music. It also examines techniques used for direction detection of sounds. The techniques that have been identified are then comprehensively compared to determine the most appropriate techniques for non-speech sound recognition. A comprehensive comparison is performed using non-speech sounds and several runs are performed to ensure accuracy. These techniques are then ranked based on their effectiveness. The best technique is found to be either Continuous Wavelet Transform feature extraction with Dynamic Time Warping or Mel-Frequency Cepstral Coefficients with Dynamic Time Warping. Both of these techniques achieve a 70% recognition rate. Once the best of the existing classification techniques is identified, the problem of uncountable sounds in the environment can be addressed. Unlike speech recognition, non-speech sound recognition requires recognition from a much wider library of sounds. Due to this near-infinite set of example sounds, the characteristics and complexity of non-speech sound recognition techniques increase. To address this problem, a systematic scheme needs to be developed for non-speech sound classification. Several different approaches are examined. Included is a new design for an environmental sound taxonomy based on an environmental sound alphabet. This taxonomy works over three levels and classifies sounds based on their physical characteristics. Its performance is compared with a technique that generates a structured tree automatically. These structured techniques are compared for different data sets and results are analysed. Comparable results are achieved for these techniques with the same data set as previously used. In addition, the results and greater information from these experiments are used to infer some information about the structure of environmental sounds in general. Finally, conclusions are drawn on both sets of techniques and areas of future research stemming from this thesis are explored.
18

Eaton, Derek James. "Non-intrusive estimation of acoustic parameters from degraded speech". Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/52637.

Abstract:
Estimation of the acoustic parameters Signal-to-Noise Ratio (SNR), Reverberation Time (T60), Direct-to-Reverberant Ratio (DRR), and clipping level from degraded speech is an open research question. These parameters are important for determining speech quality and intelligibility, and they are widely applicable to speech enhancement and speech recognition systems. Whilst SNR, T60, and DRR are useful priors for dereverberation schemes, indications of clipping and the clipping level are useful for signal restoration. This thesis investigates how accurately, and how robustly to noise (and, in the case of clipping detection, to the coding and decoding process), it is possible to estimate these parameters non-intrusively from degraded speech in real-time or near real-time, and introduces a range of novel algorithms. Alongside the algorithms, an international research challenge was staged, for which a novel noisy reverberant speech corpus was developed to determine the state-of-the-art in T60 and DRR estimation. In tests, the algorithms presented in this thesis were highly competitive, estimating T60 with Pearson correlation coefficient ρ = 0.608 and DRR with ρ = 0.314. Both algorithms achieved very low computational complexity, with Real-Time Factors (RTFs) of 0.0164 and 0.0589 respectively. The clipping detection algorithms achieved F1 of approximately 0.6 for Global System for Mobile Communications (GSM) 06.10 decoded speech with babble noise at 20 dB SNR and clipping levels in the range -5 to -20 dBFS, and also produce an estimate of the original unclipped signal level.
19

Harrison, Margaret Elizabeth. "Audiovisual Integration in Native and Non-native Speech Perception". Ohio University Honors Tutorial College / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1461330622.

20

Rhodes, Richard William. "Assessing the strength of non-contemporaneous forensic speech evidence". Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3935/.

Abstract:
The aim of this thesis is to assess the impact of long-term non-contemporaneity on the strength of forensic speech evidence. Speakers experience age-related changes to the voice over long delays, and this time also presents the opportunity for social factors to vary. These changes are shown to impact on speech parameters used in forensic analyses. Using longitudinal data from the Up documentary series, this thesis analyses the effects of aging on forensically useful acoustic parameters in eight speakers at five seven-year intervals between ages 21 and 49. The investigation reveals significant age-related changes in real-time across adulthood. Frequencies of the first three formants in monophthongs /i: ɪ e a ɑ: ʌ ɒ ʊ & u:/ and diphthongs /eɪ & aɪ/ show comprehensive reduction. For monophthongs, F1 exhibits mean change of 8.5%, greater than F2 and F3 at 3.7% and 2.2% respectively. Vowel quality also impacts on the magnitude of change in each formant. Estimations based on these data suggest that vocal tract extension and restricted articulator movement are probable drivers for acoustic change, operating on different timelines. Counter-examples to this aging pattern can generally be explained by social factors, as a result of mobility or in accordance with mainstream changes in a variety. Strength of evidence estimates for these non-contemporaneous data are calculated using a numerical likelihood ratio (LR) approach. Age-related changes result in weaker and fewer correct LRs over longer delays. Cubic coefficients of diphthong formants are investigated in line with a formant dynamic approach. These LR tests show promising results and resilience to aging, especially in F1, tentatively suggesting that, for these speakers, some speaker-specific behaviour pervades in spite of physiological changes. This analysis raises several questions with regard to applying an overtly numerical LR approach where there is apparent mismatch between forensic samples.
The effect of aging on an ASR system (BATVOX) is also tested for six male subjects. The system measures Mel-frequency cepstral coefficient (MFCC) parameters that reflect the physical properties of the vocal tract. The predicted degradation of the system's performance with increasing age is apparent. The reduction in performance is significant, varies between speakers, and is striking over longer delays for all speakers. The degradation in strength of evidence for acoustic data from monophthongs and formant dynamic coefficients, as well as that for the ASR system, demonstrates that aging presents a real problem for forensic analysis in non-contemporaneous cases. Furthermore, aging also presents issues for speech databases used to assess strength of evidence, where further research into distributions of parameters in different age groups is warranted.
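The numerical likelihood-ratio (LR) calculation referred to in this abstract can be illustrated with a minimal sketch. This is not the thesis's procedure: it assumes simple univariate Gaussian models for a single formant measurement, and all means, standard deviations and the measured value below are invented for illustration.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(measurement, suspect_mean, suspect_sd,
                     population_mean, population_sd):
    """LR = p(evidence | same speaker) / p(evidence | different speakers).
    LR > 1 supports the same-speaker hypothesis; LR < 1 supports
    the different-speaker hypothesis."""
    p_same = gaussian_pdf(measurement, suspect_mean, suspect_sd)
    p_diff = gaussian_pdf(measurement, population_mean, population_sd)
    return p_same / p_diff

# Hypothetical F1 value (Hz) from a questioned recording, a suspect model,
# and a wider reference-population model (all figures invented).
lr = likelihood_ratio(510.0, suspect_mean=500.0, suspect_sd=30.0,
                      population_mean=550.0, population_sd=80.0)
print(round(lr, 2))
```

Aging-induced mismatch would show up here as a drift of the questioned measurement away from the suspect model, weakening the LR exactly as the thesis describes.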
Style APA, Harvard, Vancouver, ISO itp.
21

Cowling, Michael. "Non-Speech Environmental Sound Classification System for Autonomous Surveillance". Thesis, Griffith University, 2004. http://hdl.handle.net/10072/365386.

Pełny tekst źródła
Streszczenie:
Sound is one of a human being's most important senses. After vision, it is the sense most used to gather information about the environment. Despite this, comparatively little research has been done into the field of sound recognition. The research that has been done mainly centres on the recognition of speech and music. Our auditory environment is made up of many sounds other than speech and music. This sound information can be tapped into for the benefit of specific applications such as security systems. Currently, most researchers are ignoring this sound information. This thesis investigates techniques to recognise environmental non-speech sounds and their direction, with the purpose of using these techniques in an autonomous mobile surveillance robot. It also presents advanced methods to improve the accuracy and efficiency of these techniques. Initially, this report presents an extensive literature survey, looking at the few existing techniques for non-speech environmental sound recognition. This survey also, by necessity, investigates existing techniques used for sound recognition in speech and music, and examines techniques used for direction detection of sounds. The techniques that have been identified are then comprehensively compared to determine the most appropriate techniques for non-speech sound recognition. A comprehensive comparison is performed using non-speech sounds, and several runs are performed to ensure accuracy. These techniques are then ranked based on their effectiveness. The best technique is found to be either Continuous Wavelet Transform feature extraction with Dynamic Time Warping or Mel-Frequency Cepstral Coefficients with Dynamic Time Warping. Both of these techniques achieve a 70% recognition rate. Once the best of the existing classification techniques is identified, the problem of uncountable sounds in the environment can be addressed. 
Unlike speech recognition, non-speech sound recognition requires recognition from a much wider library of sounds. Due to this near-infinite set of example sounds, the characteristics and complexity of non-speech sound recognition techniques increase. To address this problem, a systematic scheme needs to be developed for non-speech sound classification. Several different approaches are examined, including a new design for an environmental sound taxonomy based on an environmental sound alphabet. This taxonomy works over three levels and classifies sounds based on their physical characteristics. Its performance is compared with a technique that generates a structured tree automatically. These structured techniques are compared on different data sets and the results are analysed. Comparable results are achieved for these techniques with the same data set as previously used. In addition, the results and further information from these experiments are used to infer some information about the structure of environmental sounds in general. Finally, conclusions are drawn on both sets of techniques and areas of future research stemming from this thesis are explored.
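The best-performing combination reported above pairs acoustic features (CWT or MFCC) with Dynamic Time Warping. The classic DTW alignment at the heart of that approach can be sketched as follows; the toy 2-dimensional "feature frames" stand in for real MFCC vectors and are invented for illustration.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences of
    feature vectors, using Euclidean distance as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best accumulated cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Toy feature frames: the second sequence is a time-stretched version of
# the first, so DTW should align them with a small accumulated distance.
seq1 = [(0.0, 1.0), (1.0, 2.0), (2.0, 1.0)]
seq2 = [(0.0, 1.0), (0.0, 1.0), (1.0, 2.0), (2.0, 1.0), (2.0, 1.0)]
print(dtw_distance(seq1, seq2))
```

DTW's tolerance of time-stretching is what makes it useful for environmental sounds, whose durations vary far more than words do.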
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information Technology
Full Text
Style APA, Harvard, Vancouver, ISO itp.
22

Zhao, Wanying. "An exploration of the integration of speech with co-speech gesture with non-invasive brain stimulation". Thesis, University of Hull, 2017. http://hydra.hull.ac.uk/resources/hull:16482.

Pełny tekst źródła
Streszczenie:
The current PhD project focuses on the integration of gestures with their co-occurring speech using non-invasive brain stimulation. The project investigated 'where' and 'when' gesture-speech integration takes place. Building on the paradigm of Kelly et al. (2010), which provides a reaction-time index of automatic gesture-speech integration, it was tested whether the left posterior middle temporal gyrus (pMTG) as well as the left inferior frontal gyrus (LIFG) are causally involved in gesture-speech integration. A follow-up study investigated the time window for this integration of gesture and speech in pMTG. This study found that gesture has a priming effect on the semantic retrieval of speech. This effect only manifested itself after gesture had been clearly understood and before the semantic analysis of speech. Based on the common coding hypothesis, this finding was interpreted in terms of gesture and speech originating from a common coding system, with both LIFG and pMTG as its neural underpinning, enabling bi-directional influences between both domains.
Style APA, Harvard, Vancouver, ISO itp.
23

Keenaghan, Kevin Michael. "A Novel Non-Acoustic Voiced Speech Sensor Experimental Results and Characterization". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0114104-144946/.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
24

Choo, Ai Leen. "An electromyographic examination of lip asymmetry during speech and non-speech oral movements in adults who stutter". Thesis, University of Canterbury. Psychology, 2008. http://hdl.handle.net/10092/1722.

Pełny tekst źródła
Streszczenie:
Past research investigating stuttering has cited atypical cerebral lateralization in adults who stutter (AWS) during speech production. The purpose of this study was to measure cerebral activation in AWS as indicated by lip asymmetry. The study included five AWS (mean age = 26 years) and five adults who do not stutter (AWNS) (mean age = 25 years). The tasks included single-word productions, single-sentence readings and lip pursings. The peak electromyographic (EMG) amplitude was determined for the left upper, right upper, left lower and right lower lip quadrants around the mouth. Overall, EMG amplitudes were higher for the lower lip than the upper lip. Based on examination of peak EMG amplitude, significant differences were found between speaker groups. For both speech and non-speech tasks, the highest EMG amplitudes for the AWS and AWNS groups were on the left lower and right lower sides of the mouth, respectively. The AWNS group showed strong correlations in EMG activity across the four lip sites (r>0.97), indicating overall synchronous lip activity during speech and non-speech tasks. In contrast, the AWS group showed a strong correlation (r=0.97) only for the left upper and left lower lips, while the other lip pairings were not strongly correlated (r<0.738), indicating otherwise reduced synchrony of lip activity. While the small sample size suggests caution, the clear differences in the pattern of lip EMG activity demonstrated in the present study provide evidence of differences between AWS and AWNS in the cerebral activation governing lip movement. The greater left lip activity observed in AWS was indicative of greater right-hemisphere cerebral activation, while the increased right lip activity in AWNS was indicative of greater left-hemisphere participation. The results of the present study provide support for the hypotheses of reversed lateralization for speech and non-speech processing and reduced coordination of the speech musculature in AWS.
Style APA, Harvard, Vancouver, ISO itp.
25

Takano, Shoji 1961. "The myth of a homogeneous speech community: The speech of Japanese women in non-traditional gender roles". Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282516.

Pełny tekst źródła
Streszczenie:
The overall objective of this dissertation research was to account for heterogeneous language use closely linked to changes in speakers' social lives and ultimately to provide empirical evidence against a mythical, stereotyped view of Japan as a homogeneous speech community. As the most revealing variable, I have focused on the speech of Japanese women whose gender roles have undergone drastic transformation in contemporary society. The research consists of two particular phases of investigation. The first phase involves using the variationist approach to analyze the speech of three groups of women leading distinctive social lives: full-time homemakers, full-time working women in clerical positions and those in positions of authority. The results refute as overgeneralizations the claims of past mainstream work on Japanese gender differentiation, which has consistently defined women's language use based exclusively on middle-class full-time homemakers under the influence of the traditional ideology of complementary gender roles. Variable rule analysis reveals that differential performance grammars are operating among the three groups of women, and that the inter-group differentiation can be interpreted as social stratification more meaningfully correlated with speakers' concrete occupation-bound categories than abstract ones such as social class membership. Potential causes for such differentiation are accounted for in terms of speakers' everyday contacts with people and types of communicative routines and experiences in their occupation-bound communication networks. The second phase of the investigation sheds light on the sociolinguistic dilemmas Japanese working women in positions of leadership are likely to face. 
Working women in charge, a newly emerging group of women in non-traditional gender roles, tend to confront contradictions between the culturally prescribed ways of speaking for women (i.e., speaking politely, indirectly, deferentially) and the communicative requirements of their occupational status. Both quantitative and qualitative analyses of directive speech acts at a number of workplaces reveal that working women in charge characteristically use a variety of innovative sociolinguistic strategies to resolve such dilemmas. These strategies include de-feminization of overtly feminine morphosyntactic structures, contextualization to compensate for the indirect framing of directives, linguistic devices to mask power/status asymmetries with subordinates and promote collaborative rapport and peer solidarity, style-shifting of the predicate to negotiate the distribution of power, and strategic uses of polite language as an indexicality of their occupational status and identity rather than as a marker of powerlessness in conflict talk.
Style APA, Harvard, Vancouver, ISO itp.
26

Raab, Martin. "Real world approaches for multilingual and non-native speech recognition". Berlin Logos-Verl, 2010. http://d-nb.info/1002021049/04.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
27

Pye, A. "The perception of emotion and identity in non-speech vocalisations". Thesis, Bangor University, 2015. https://research.bangor.ac.uk/portal/en/theses/the-perception-of-emotion-and-identity-in-nonspeech-vocalisations(efff271d-3c3a-4a39-9ccb-b51cadb937e8).html.

Pełny tekst źródła
Streszczenie:
The voice contains a wealth of information relevant for successful and meaningful social interactions. Aside from speech, the vocal signal also contains paralinguistic information such as the emotional state and identity of the speaker. The three empirical chapters reported in this thesis investigate the perceptual processing of paralinguistic vocal cues. The first set of studies uses unimodal adaptation to explore the mental representation of emotion in the voice. Using a series of different adaptor stimuli (human emotional vocalisations, emotive dog calls and affective instrumental bursts), it was found that aftereffects in human vocal emotion perception were largest following adaptation to human vocalisations. An aftereffect was still present following adaptation to dog calls; however, it was smaller in magnitude than the human vocalisation aftereffect, potentially as a result of the acoustic similarities between adaptor and test stimuli. Taken together, these studies suggest that the mental representation of emotion in the voice is not species-specific but is specific to vocalisations as opposed to all affective auditory stimuli. The second empirical chapter examines the supramodal relationship between identity and emotion in face-voice adaptation. It was found that emotional faces can produce aftereffects in vocal emotion perception, irrespective of whether the identities of the adaptor and test stimuli are congruent. However, this effect was found to depend on the adapting stimuli being dynamic as opposed to static in nature. The final experimental chapter looks at the mechanisms underlying the perception of vocal identity. A voice matching test was developed and standardised, finding large individual differences in voice matching ability. 
Furthermore, in an identity adaptation experiment, absolute difference in aftereffect size demonstrated a trend towards significance when correlated with voice matching ability, suggesting that there is a relationship between perceptual abilities and the degree of plasticity observed in response adaptation.
Style APA, Harvard, Vancouver, ISO itp.
28

Martin, Joshua Michael. "Structural differences in REM and Non-REM dream reports assessed by non-semantic speech graph analysis". PROGRAMA DE PÓS-GRADUAÇÃO EM PSICOBIOLOGIA, 2017. https://repositorio.ufrn.br/jspui/handle/123456789/24574.

Pełny tekst źródła
Streszczenie:
Made available in DSpace on 2018-01-18 (GMT). Previous issue date: 2017-12-08
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
A diferença entre a mentação experimentada durante o sono de movimentos oculares rápidos (REM) e o sono não-REM persiste como questão importante para investigação no campo de pesquisa dos sonhos. Estudos anteriores têm mostrado que os relatos de sonho documentados depois do REM são, em média, mais longos, vívidos, bizarros, emocionais e com aspectos mais narrativos do que os relatos do não-REM. Apesar desses achados, falta uma comparação estrutural entre relatos de sonho do REM e não-REM no que diz respeito à organização de palavra-a-palavra, e diversas medidas tradicionais de sonhos podem ser confundidas pelo comprimento do relato. A análise de fala transformada em grafos direcionados de palavras pode ser aplicada para fazer uma avaliação estrutural de relatos verbais e também para controlar as diferenças individuais de verbosidade. No presente estudo, tivemos como objetivo investigar as possíveis diferenças na conectividade dos relatos e sua aproximação a uma estrutura aleatória através da análise de grafos em 125 relatos de sonho obtidos por 19 participantes em despertares controlados nas fases de sono REM e N2. Constatou-se que: (1) grafos do REM possuem uma conectividade maior do que os do N2; entretanto, essas diferenças não foram refletidas na aproximação a um grafo randômico; (2) diversas medidas de grafo podem predizer avaliações externas da complexidade do sonho, onde a conectividade aumenta e sua natureza randômica cai em relação à complexidade do relato; e (3) o Componente Maior Conectado (LCC) do grafo pode melhorar o ajuste de um modelo contendo o comprimento do relato como variável no discernimento da fase do sono e na predição da complexidade do sonho. Esses resultados sugerem que os relatos do REM possuem uma conectividade maior do que os relatos do N2 (i.e. as palavras recorrem com uma distância maior), o que, em nossa visão, está relacionado a diferenças subjacentes na complexidade dos sonhos. 
Esses achados também apontam para a análise de grafos como um método promissor no campo dos sonhos, devido à sua relação com a complexidade do sonho e ao seu potencial de atuar como uma medida complementar ao comprimento do relato.
The extent to which Rapid Eye Movement Sleep (REM) mentation may differ to that of non-REM remains an important area of enquiry in dream research. Previous studies have found that dream reports collected after REM awakenings are, on average, longer, more vivid, bizarre, emotional and story-like compared to those collected after non-REM. Despite this, a comparison of the word-to-word structural organisation of dream reports is lacking, and traditional measures that distinguish REM and non-REM dreaming may be confounded by report length. The analysis of speech as directed word graphs can be suitably applied, as it provides a structural assessment of verbal reports, while controlling for differences in verbosity. In the present study, we aimed to investigate the differences in the connectedness of dream reports and their approximation to a random-like structure through applying speech graph analysis to 125 mentation reports obtained from 19 participants in controlled laboratory awakenings from REM and N2 sleep. We found that: (1) transformed graphs from REM possess a larger connectedness compared to those from N2; (2) measures of graph structure can predict ratings of dream complexity, where increases in connectedness and decreases in their random-like nature are observed in relation to increasing dream report complexity; and (3) the Largest Connected Component (LCC) can improve a model containing report length in predicting sleep stage and dream complexity. These results suggest that REM dream reports have a larger connectedness compared to N2 (i.e. words recur with a longer range), which we interpret to be related to underlying differences in dream complexity. They also point to speech graph analysis as a promising method for dream research, due to its relation to dream complexity and its potential to complement report length in dream analysis.
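The word-graph idea underlying this analysis can be sketched with standard-library Python: each distinct word becomes a node, each consecutive word pair an edge, and connectedness is summarised by the size of the largest connected component (LCC). The example sentence is invented, and this is only an illustration of the general technique, not the exact pipeline used in the thesis.

```python
from collections import defaultdict

def word_graph_edges(report):
    """Turn a verbal report into a directed word graph: one node per
    distinct word, one edge for each consecutive word pair."""
    words = report.lower().split()
    return set(zip(words, words[1:]))

def largest_connected_component(edges):
    """Size of the largest weakly connected component (edges treated
    as undirected for connectivity), found by depth-first search."""
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    seen, best = set(), 0
    for start in neighbours:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nxt in neighbours[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        best = max(best, size)
    return best

# A recurring word ("dog") makes the graph loop back on itself, which is
# the kind of long-range recurrence the connectedness measures pick up.
report = "the dog chased the cat and the dog barked"
print(largest_connected_component(word_graph_edges(report)))
```

Because the graph is built over distinct words rather than tokens, measures like the LCC are less inflated by sheer report length than a raw word count, which is why the thesis proposes them as a complement to it.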
Style APA, Harvard, Vancouver, ISO itp.
29

Pearman, Andrea. "Native & non-native perception of casual speech: English & Catalan". Doctoral thesis, Universitat Autònoma de Barcelona, 2007. http://hdl.handle.net/10803/4914.

Pełny tekst źródła
Streszczenie:
Un aspecto fundamental en la percepción del habla es cómo se transforma la señal acústica en unidades significativas y se reconocen palabras. El habla coloquial se ve frecuentemente afectada por procesos de reducción fonética que son comunes y productivos, p.e., asimilación, lenición, elisión, etc. Los oyentes (nativos), sin embargo, pueden reconocer fácilmente las consecuencias acústicas de estos procesos y entender el mensaje del hablante. Este estudio examina el procesamiento del habla informal por parte de nativos y no nativos. Estudios sobre el Modelo de Asimilación Perceptivo ('Perceptual Assimilation Model,' p.e., Best, 1995) muestran que los hablantes tienden a asimilar los sonidos de una lengua extranjera (L2) a la categoría más próxima en su lengua materna (L1) y que la percepción de los sonidos de la L2 se puede predecir en base a cómo se asimilan a los sonidos de la L1. Este estudio elabora las predicciones de PAM y las extiende al procesamiento del habla coloquial en inglés y catalán por parte de hablantes nativos y no-nativos. En particular, examina si los hablantes no nativos tienen más facilidad para interpretar los procesos de reducción en la L2 que también se dan en su lengua (L1) en contextos parecidos (mismo proceso, mismo contexto), que procesos que se dan en contextos distintos (mismo proceso, contexto distinto) o procesos de la L2 que no ocurren en la L1 (proceso distinto). Con este objetivo, una frase en inglés, Is your friend the one that can't go to bed by ten, y una frase en catalán, Em sap greu que cap dels dos xicots no em pugui donar un cop de mà, que presentaban diversos casos de asimilación, lenición y elisión, fueron segmentadas en fragmentos de 80 ms como en Shockey (1997, 1998, 2003). Los fragmentos se presentaron a 24 hablantes nativos (12 ingleses y 12 catalanes), y 24 no nativos (12 catalanes y 12 ingleses) con un nivel avanzado de la L2. 
Las respuestas se analizaron en función del reconocimiento de palabras y de las "confusiones." Los resultados muestran que los hablantes no nativos presentan porcentajes de reconocimiento de palabras más bajos y el reconocimiento ocurre más tarde en general, además de más procesamiento de abajo hacia arriba ('bottom-up' o procesamiento fonético), tanto para los datos del inglés como del catalán. Los resultados también muestran que los no nativos son mejores a la hora de reconocer palabras reducidas por procesos existentes en la L1 en el mismo contexto, que por procesos que se dan en un contexto distinto o por procesos que no se dan en la L1. También se han identificado otros factores importantes en el reconocimiento del habla reducida, en particular factores relacionados con la frecuencia. Por último, se considera cómo los resultados contribuyen a la modelización del procesamiento del habla.
A fundamental problem concerning speech perception is how listeners transform the acoustic signal into meaningful units and recognize words. Normal speech is often (heavily) affected by common, productive reduction processes, e.g., assimilation, weakening, deletion, etc. Despite this, (native) listeners are easily able to undo the acoustic consequences of these processes and understand the speaker's intended message. This study examines native and non-native processing of casual speech. Research related to the Perceptual Assimilation Model (PAM, e.g., Best, 1995) evidences that listeners tend to assimilate foreign sounds to the closest L1 category and that perception of L2 sounds may be predicted on the basis of how they assimilate to L1 sounds. This study extends the predictions of PAM to the processing of English and Catalan casual speech in native and non-native speakers. Specifically, it examines whether non-natives are better at interpreting the results of common L2 reduction processes that occur in contexts similar to the L1 (same process, same context) than L2 processes which occur in different contexts (same process, different context) or L2 processes which do not occur in the L1 (different process). A highly reduced English sentence, Is your friend the one that can't go to bed by ten, and Catalan sentence, Em sap greu que cap dels dos xicots no em pugui donar un cop de mà, affected by assimilation, weakening, and deletion, were gated in 80 ms steps as in Shockey (1997, 1998, 2003). Gates were presented to 24 natives (12 English and 12 Catalan) and 24 non-natives (12 Catalan and 12 English) with an advanced command of the language. Responses were examined in terms of successful recognition and "confusions." Results show that non-native speakers exhibit generally lower and later lexical recognition, in addition to greater bottom-up (phonetic processing) than native speakers, both for the English and Catalan data. 
Moreover, the data bear out that non-natives are generally better at recognizing words affected by reduction processes existing in the L1 in the same context, than those occurring in a different context or not occurring in the L1. Other factors, particularly frequency, were identified as also important. Finally, the results are considered in terms of relevant issues in speech processing modeling.
Style APA, Harvard, Vancouver, ISO itp.
30

Ng, W. P. "An in-service non-intrusive measurement device for characterising speech networks". Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638322.

Pełny tekst źródła
Streszczenie:
The work presented in this thesis evaluates echo path modelling using an in-service non-intrusive method. The in-service non-intrusive measurement devices (INMDs) are based on least mean square (LMS) digital adaptive filters (DAFs). The modelling convergence rate (misadjustment) is derived from the optimal Wiener weights and is used to define the performance criterion; the excitation for the DAFs is conversational speech. One second of adaptation is allowed before the DAF's tap weights are interrogated in order to determine the echo path response, hence all improvements quoted are based on one second of adaptation. Speech-driven INMDs produced a 'step' change in misadjustment during adaptation in noise-free conditions. The unvoiced segments in speech produced faster convergence than the voiced segments. In a noisy environment, however, the low-energy unvoiced segments are masked by noise, thus producing divergence. Novel techniques were developed to minimise divergence in a noisy environment, including divergence detectors (DDs) and an adaptive step-size algorithm. Implementing DDs revealed that a long-running energy-based detector produced better divergence minimisation, showing an improvement of 28 dB in misadjustment at an echo-to-noise ratio (e/N) of 0 dB after one second of adaptation. Meanwhile, the adaptive step-size algorithm, in which the instantaneous step size is derived from the correlation of the input speech, showed a 31.3 dB improvement under the same conditions. Investigation of the LMS convergence properties revealed that white signals gave the fastest convergence rate. Hence, three new whitening techniques were designed: Fast Fourier Transform Least Mean Square (FFTLMS), the parallel DAF measurement device (PDMD) and chaotic LMS. 
The FFTLMS method employed the reciprocal input spectrum to flatten the power spectral density (PSD) spread and yielded an improvement of 22.2 dB, while the PDMD method exploits the LMS stochastic behaviour upon convergence to provide a similar whitening effect, achieving an improvement of 23.8 dB. Both methods involve pre-filtering the input and echo-path speech with whitening coefficients generated by the respective methods. The final method proposed in this thesis is the chaotic LMS, in which the input speech is chaotically coded before adaptation. The coded speech has a white spectral density and higher overall energy content. Performance using the coded speech is similar to that with white noise, and the improvement achieved was 34.3 dB after 125 ms.
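The LMS adaptation at the core of these devices can be sketched with a toy system-identification example: an adaptive filter learns a hypothetical 3-tap echo path from white-noise excitation (which, as the thesis notes, converges fastest). All figures are invented for illustration; this is not the thesis's implementation.

```python
import random

def lms_identify(x, echo_path, mu, taps):
    """Identify an unknown echo path with an LMS adaptive filter.
    x is the far-end excitation; the desired signal is x filtered by
    echo_path. Returns the learned tap weights."""
    w = [0.0] * taps
    for n in range(taps, len(x)):
        frame = x[n - taps:n][::-1]              # most recent sample first
        desired = sum(h * s for h, s in zip(echo_path, frame))
        estimate = sum(wi * s for wi, s in zip(w, frame))
        error = desired - estimate
        # LMS update: step each weight in the direction that
        # reduces the instantaneous squared error
        w = [wi + mu * error * s for wi, s in zip(w, frame)]
    return w

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(5000)]  # white excitation
true_path = [0.5, -0.3, 0.2]                       # toy 3-tap echo path
w = lms_identify(x, true_path, mu=0.01, taps=3)
print([round(wi, 2) for wi in w])
```

With speech instead of white noise as the excitation, the input autocorrelation spreads the eigenvalues of the input covariance and slows this convergence, which is exactly the problem the whitening techniques above address.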
Style APA, Harvard, Vancouver, ISO itp.
31

Chu, Kam Keung. "Feature extraction based on perceptual non-uniform spectral compression for noisy speech recognition /". access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-ee-b19887516a.pdf.

Pełny tekst źródła
Streszczenie:
Thesis (M.Phil.)--City University of Hong Kong, 2005.
"Submitted to Department of Electronic Engineering in partial fulfillment of the requirements for the degree of Master of Philosophy" Includes bibliographical references (leaves 143-147)
Style APA, Harvard, Vancouver, ISO itp.
32

Stibbard, Richard. "Vocal expressions of emotions in non-laboratory speech : an investigation of the Reading/Leeds Emotion in Speech Project annotation data". Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343222.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Kulakcherla, Sudheer. "Non [sic] linear adaptive filters for echo cancellation of speech coded signals /". free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1426079.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
34

Fagelson, Marc A. "Tinnitus and Hyperacusis Management: Non-Auditory". Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/1675.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

Sanders, Lisa Diane. "Speech segmentation by native and non-native speakers : behavioral and event-related potential evidence /". view abstract or download file of text, 2001. http://wwwlib.umi.com/cr/uoregon/fullcit?p3018392.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--University of Oregon, 2001.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 215-239). Also available for download via the World Wide Web; free to University of Oregon users.
Style APA, Harvard, Vancouver, ISO itp.
36

Brewster, Stephen. "Providing a structured method for integrating non-speech audio into human-computer interfaces". Thesis, University of York, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241055.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Hammonds, Phillip Edward. "Directive speech acts in conflict situations among advanced non-native speakers of English". Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/252893.

Pełny tekst źródła
Streszczenie:
This study investigates tasks in which international graduate students who are non-native speakers of English must use a second or foreign language (L2) in simulated conflict and stressful situations with native speakers. In particular, the study examines conflicts where the non-native speaker (NNS) must issue a directive to a native speaker (NS) in order to achieve an important outcome or avoid unwanted or even dangerous consequences. Unlike previous studies, which place equal or no emphasis on the consequences of the directive under investigation, this study focuses on the perlocutionary effect that the speaker anticipates as a result of uttering a directive. Although this is an empirical study, it also critically examines the directive as a macro or discursive speech act colored by the relationships of Power and Distance and by the Consequences the speaker perceives in the context of the situation in which it is uttered. The analysis of the data reveals that most advanced NNSs have difficulty in high-stakes situations, based on a comparison of their directives to NS directives, supporting the hypothesis that the encoding of power in a directive is essential to the NNS as well as to the NS in attaining or avoiding some important result. The qualitative evidence further suggests that an important source of this difficulty is the constant awareness that even the advanced NNS is still a NNS, and this produces a diminished sense of power relative to NSs.
Style APA, Harvard, Vancouver, ISO itp.
38

Richardson, Nathan Edward. "The Effect of Non-native Dialect on Speech Recognition Threshold for Native Mandarin Speakers". BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1333.

Pełny tekst źródła
Streszczenie:
Speech recognition thresholds are used for several clinical purposes, so it is important that they be accurate reflections of hearing ability. Variations in the acoustic signal may artificially decrease threshold scores, and such variations can result from being tested in a second dialect. Thirty-two native Mandarin-speaking subjects (sixteen from mainland China and sixteen from Taiwan) participated in speech recognition threshold testing in both dialects to see whether using non-native dialect test materials resulted in a significantly lower score. In addition, tests were scored by two interpreters, one from each dialect, to see whether the scorer's dialect resulted in a significantly different score. Talker dialect was found to be statistically significant, while scorer dialect was not. Factors explaining these findings, as well as clinical implications, are discussed.
Style APA, Harvard, Vancouver, ISO itp.
39

Olson, Marcia Ann. "Speech Recognition with Linear and Non-linear Amplification in the Presence of Industrial Noise". PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5167.

Pełny tekst źródła
Streszczenie:
In order to help reduce hearing loss, the Occupational Safety and Health Administration regulates noise levels in work environments. However, hearing aids are the primary rehabilitative service provided for individuals with an occupational hearing loss, and very little is being done to monitor hearing aid use in the work environment. Noise which may be safe to an unaided ear can be amplified to levels that are damaging to the ear when a hearing aid is being worn. However, it is necessary for some individuals to wear amplification in these noisy environments for safety reasons. As a consequence, it is important that these individuals be able to understand speech in the presence of industrial noise while wearing amplification. The purpose of this study was to determine if there is a significant difference in speech intelligibility between linear hearing aids and different types of non-linear hearing aids when they are used in the presence of industrial noise. Twenty-four normal-hearing subjects were selected for this study. Each subject was asked to identify words in four CID W-22 lists which had been recorded through a linear hearing aid and two different non-linear hearing aids. Test results showed significantly better word recognition for the linear-in-quiet condition than for all other conditions. Significantly higher scores were obtained for the TILL condition than for the linear-in-noise and BILL conditions. These preliminary results suggest that an individual wearing amplification in a noisy work environment would benefit from a TILL circuit, which would provide better speech intelligibility in this type of environment and therefore a safer work environment for the hearing aid user.
Style APA, Harvard, Vancouver, ISO itp.
40

Talwar, Gaurav. "HMM-based non-intrusive speech quality and implementation of Viterbi score distribution and hiddenness based measures to improve the performance of speech recognition". Laramie, Wyo. : University of Wyoming, 2006. http://proquest.umi.com/pqdweb?did=1288654981&sid=7&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Navarrete, Sánchez Eduardo. "Phonological activation of non-produced words. The dynamics of lexical access in speech production". Doctoral thesis, Universitat de Barcelona, 2007. http://hdl.handle.net/10803/399981.

Pełny tekst źródła
Streszczenie:
Speaking can be considered a goal-directed behavior because speakers have to retrieve the appropriate words and phonemes from their mental lexicon. However, observational and experimental evidence suggests that during the lexical and phonological retrieval processes, words other than the intended ones are activated to some degree. In this scenario, it is necessary to postulate selection mechanisms in charge of determining, among the activated representations, which ones will be prioritized and further processed in order to finally utter the speech signal. What control mechanism allows speakers to focus on the appropriate set of representations and reject the inappropriate ones? It is generally agreed that the most relevant parameter guiding word and phoneme selection is the level of activation of the corresponding representations, in the sense that the most activated representations at a specific moment will be the ones selected. In addition, theories of speech production agree that the selection mechanisms also take into account the activation level of other, non-target representations, in the sense that the selection of one representation is more difficult the more activated other competing representations are. According to these two assumptions, the selection of a word would depend on two parameters: a) the amount of activation that this word receives from the conceptual system and b) the level of activation of other representations at the moment of selection. In order to have a clear understanding of the mechanisms that speakers employ to decide which representations to select, we first need to specify under which circumstances this selection mechanism takes place. In particular, this dissertation tries to describe the pattern of activation during lexical access. Specifically, which words and phonemes are activated during the lexicalization process of the intended concept?
This is an important issue because the types of processes in charge of encoding/selecting information at each level of the system may differ depending on what other information is available at a particular moment. For instance, the selection of the word ‘car’ and its corresponding phonemes may depend on whether other words and phonemes are also activated or not. The main purpose of this dissertation is to explore whether concepts outside of the communicative goal of the speaker are nevertheless activated in the process of language production. We assess whether there is lexical and phonological activation of these concepts. We take an experimental approach and measure speakers’ performance in different naming contexts. In particular, participants were instructed to name target stimuli while ignoring the presentation of distractor pictures. The semantic and phonological manipulations between target and distractor names allowed us to analyze whether participants have lexicalized the distractor picture and to what degree. In the next chapter we introduce the functional architecture of the speech production system. In the first section we describe the architecture of the system and then we focus on describing how information is propagated between the different levels of the system. This is the main topic of the dissertation and in the rest of the chapter we introduce three theoretical proposals about the propagation of the information and also some experimental evidence. Chapter three contains the main aim and specific objectives of the thesis. Chapters four, five, six and seven contain the experimental part. Finally, in chapters eight and nine we discuss the theoretical implications that follow from our experiments.
Hablar es, sin duda alguna, una de las capacidades más asombrosas que los seres humanos adquieren. Una de las cuestiones que más interesa a los psicólogos que estudian la producción oral del lenguaje es la descripción de los procesos y mecanismos mediante los cuales el hablante recupera las palabras de su memoria. La presente tesis está relacionada con esta cuestión. La producción del habla implica el acceso a representaciones léxicas y fonológicas muy concretas. La evidencia observacional y experimental sugiere que durante el acceso léxico y fonológico otras palabras pueden estar activadas y llegar incluso a interferir. Por lo tanto, parece necesario postular un mecanismo que permita al hablante acceder a las palabras adecuadas y rechazar aquellas que, pese a no formar parte de la intención comunicativa, hayan podido ser activadas. Los modelos de producción coinciden en postular que el parámetro que guía la selección léxica y fonológica es el nivel de activación de las representaciones, en el sentido de que la representación más activada en un determinado momento es la que finalmente resulta seleccionada. Los modelos también consideran que esta selección depende del nivel de activación de otras representaciones, en el sentido de que resulta más difícil seleccionar una representación cuanto más activadas están otras representaciones ajenas a la intención comunicativa. Esta tesis describe las circunstancias en las que se produce la selección léxica y la recuperación fonológica durante la producción del habla. Concretamente, ¿qué palabras y fonemas están activados durante el proceso de lexicalización del mensaje comunicativo? En la tesis analizamos si conceptos que no forman parte del mensaje preverbal del hablante llegan a activar sus correspondientes representaciones léxicas y fonológicas. En los experimentos de esta tesis, los participantes nombran un estímulo a la vez que ignoran la presencia de dibujos distractores.
La manipulación de la relación semántica y fonológica entre el nombre del estímulo y el distractor permite analizar hasta qué punto se ha lexicalizado el dibujo distractor.
Style APA, Harvard, Vancouver, ISO itp.
42

Samokhina, Natalya. "Phonetics and Phonology of Regressive Voicing Assimilation in Russian Native and Non-native Speech". Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194543.

Pełny tekst źródła
Streszczenie:
In recent years, a great deal of research on second language (L2) acquisition has been concerned with non-target production of L2 learners, addressing issues such as native language (L1) transfer into L2 and the nature and source of developmental errors. Previous studies have mostly focused on the analysis of discrete L2 segments (Flege 1987, 1999; Major & Kim 1996), rather than on L2 phonological patterns. This study, however, examines the production of sequences of sounds in Russian L1 and L2 from both the phonetic and phonological perspectives. This dissertation investigates native and non-native production of real and nonsense words containing obstruent clusters in which a phonological phenomenon known as regressive voicing assimilation is required. In Russian, forms like lodka `boat' are rendered orthographically with a voiced obstruent which is pronounced as a voiceless one when followed by a voiceless obstruent. The results of the experiments reveal several production patterns in L1 and L2 speech, as well as gradiency in devoicing, which are further analyzed within the stochastic Optimality Theory framework. Categorical production is accounted for by the re-ranking of L1 and L2 constraints, whereas gradiency in production is viewed as a result of the re-ranking of constraints within phonetically detailed constraint families.
Style APA, Harvard, Vancouver, ISO itp.
43

Alharbi, Saad Talal. "Graphical and non-speech sound metaphors in email browsing : an empirical approach : a usability based study investigating the role of incorporating visual and non-speech sound metaphors to communicate email data and threads". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4244.

Pełny tekst źródła
Streszczenie:
This thesis investigates the effect of incorporating various information visualisation techniques and non-speech sounds (i.e. auditory icons and earcons) in email browsing. This empirical work consisted of three experimental phases. The first experimental phase aimed at finding out the most usable visualisation techniques for presenting email information. This experiment involved the development of two experimental email visualisation approaches, called LinearVis and MatrixVis. These approaches visualised email messages based on a dateline together with various types of email information, such as the time and the senders. The findings of this experiment were used as a basis for the development of a further email visualisation approach, called LinearVis II. This novel approach presented email data based on multi-coordinated views. The usability of message retrieval in this approach was investigated and compared to a typical email client in the second experimental phase. Users were required to retrieve email messages in the two experiments with the provided relevant information, such as the subject, status and priority. The third experimental phase aimed at exploring the usability of retrieving email messages by using other types of email data, particularly email threads. This experiment investigated the synergic use of graphical representations with non-speech sounds (Multimodal Metaphors), graphical representations alone, and textual display to present email threads and to communicate contextual information about email threads. The findings of this empirical study demonstrated that there is high potential for using information visualisation techniques and non-speech sounds (i.e. auditory icons and earcons) to improve the usability of email message retrieval. Furthermore, the thesis concludes with a set of empirically derived guidelines for the use of information visualisation techniques and non-speech sounds to improve email browsing.
Style APA, Harvard, Vancouver, ISO itp.
44

Alharbi, Saad T. "Graphical and Non-speech Sound Metaphors in Email Browsing: An Empirical Approach. A Usability Based Study Investigating the Role of Incorporating Visual and Non-Speech Sound Metaphors to Communicate Email Data and Threads". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4244.

Pełny tekst źródła
Streszczenie:
This thesis investigates the effect of incorporating various information visualisation techniques and non-speech sounds (i.e. auditory icons and earcons) in email browsing. This empirical work consisted of three experimental phases. The first experimental phase aimed at finding out the most usable visualisation techniques for presenting email information. This experiment involved the development of two experimental email visualisation approaches, called LinearVis and MatrixVis. These approaches visualised email messages based on a dateline together with various types of email information, such as the time and the senders. The findings of this experiment were used as a basis for the development of a further email visualisation approach, called LinearVis II. This novel approach presented email data based on multi-coordinated views. The usability of message retrieval in this approach was investigated and compared to a typical email client in the second experimental phase. Users were required to retrieve email messages in the two experiments with the provided relevant information, such as the subject, status and priority. The third experimental phase aimed at exploring the usability of retrieving email messages by using other types of email data, particularly email threads. This experiment investigated the synergic use of graphical representations with non-speech sounds (Multimodal Metaphors), graphical representations alone, and textual display to present email threads and to communicate contextual information about email threads. The findings of this empirical study demonstrated that there is high potential for using information visualisation techniques and non-speech sounds (i.e. auditory icons and earcons) to improve the usability of email message retrieval. Furthermore, the thesis concludes with a set of empirically derived guidelines for the use of information visualisation techniques and non-speech sounds to improve email browsing.
Taibah University in Medina and the Ministry of Higher Education in Saudi Arabia.
Style APA, Harvard, Vancouver, ISO itp.
45

Perez, Rachel. "Perspectives of Bilingual Speech-Language Pathology Assistants (SLPAs)| Are They Prepared to Assist with Non-Biased Assessments?" Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10750207.

Pełny tekst źródła
Streszczenie:

A central challenge in California is how best to provide speech and language services to linguistically and culturally diverse (CLD) populations, given that only a small percentage of speech-language pathologists (SLPs) identify as bilingual. The present thesis investigated whether bilingual speech-language pathology assistants (SLPAs) can serve as suitable collaborators with SLPs in the process of carrying out screenings and assessments of CLD students/clients. A survey was administered to 6 bilingual SLPAs who reported that they currently assist with bilingual assessment. The results revealed that these participants expressed confidence in their ability to assist in assessments of CLD students/clients. This confidence seems to stem from their linguistic fluency, as well as from their cultural competency. Moreover, these SLPAs reported making use of materials and procedures identified as best practices. However, training for assisting in CLD assessments was largely obtained during work experience, not from formal coursework. Future research will be needed to identify how SLPA training programs can best train bilingual SLPAs to competently assist in CLD assessments in California schools and clinics.

Style APA, Harvard, Vancouver, ISO itp.
46

Christmann, Corinna [Verfasser], i Thomas [Akademischer Betreuer] Lachmann. "The role of stimulus complexity in auditory research of speech and non-speech on the behavioral and electrophysiological level / Corinna Christmann. Betreuer: Thomas Lachmann". Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1048047008/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Leplâtre, Grégory. "The design and evaluation of non speech sounds to support navigation in restricted display devices". Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270963.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Hawksley, Andrea Johanna. "An online system for entering and annotating non-native Mandarin Chinese speech for language teaching". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46128.

Pełny tekst źródła
Streszczenie:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (leaves 59-62).
This thesis describes the design and implementation of an intuitive online system for the annotation of non-native Mandarin Chinese speech by native Chinese speakers. This system will allow speech recognition researchers to easily generate a corpus of labeled non-native speech. We had five native Chinese speakers test the annotation system on a sample bank of 250 Chinese utterances and observed fair to moderate inter-rater agreement scores. In addition to giving us a benchmark for inter-rater agreement, this also demonstrates the feasibility of having remote graders annotate sets of utterances. Finally, we extend our work to Chinese language instruction by creating a web-based interface for Chinese reading assignments. Our design is a simple, integrated solution for completing and correcting spoken reading assignments that also streamlines the compilation of a corpus of labeled non-native speech for use in future research.
by Andrea Johanna Hawksley.
M.Eng.
Style APA, Harvard, Vancouver, ISO itp.
49

Kilman, Lisa. "Lost in Translation : Speech recognition and memory processes in native and non-native language perception". Doctoral thesis, Linköpings universitet, Handikappvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121034.

Pełny tekst źródła
Streszczenie:
This thesis employed an integrated approach and investigated intra- and inter-individual differences relevant for normally hearing (NH) and hearing-impaired (HI) adults in native (Swedish) and non-native (English) languages in adverse listening conditions. The integrated approach encompassed the role of cognition as a focal point of interest as well as perceptual-auditory and linguistic factors. Paper I examined the extent to which proficiency in a non-native language influenced native and non-native speech perception performance for NH listeners in noise maskers compared to native and non-native speech maskers. Working memory capacity in native and non-native languages and non-verbal intelligence were also assessed. The design of paper II was identical to that of paper I; however, the participants in paper II had a hearing impairment. The purpose of paper III was to assess how NH and HI listeners subjectively evaluated the perceived disturbance from the speech and noise maskers in the native and non-native languages. Paper IV examined how well native and non-native stories that were presented unmasked and masked with native and non-native speech were recalled by NH listeners. Paper IV further investigated the role of working memory capacity in the episodic long-term memory of story contents, as well as proficiency in native and non-native languages. The results showed that, generally, the speech maskers affected performance and perceived disturbance more than the noise maskers did. Regarding the non-native target language, interference from speech maskers in the dominant native language is taxing for speech perception performance, perceived disturbance and memory processes. However, large inter-individual variability between the listeners was observed. Part of this variability relates to non-native language proficiency. Perceptual and cognitive effort may hinder efficient long-term memory encoding, even when stimuli are appropriately identified at a perceptual level.
A large working memory capacity (WMC) provides a better ability to suppress distractions and allocate processing resources to meet assigned objectives. The relatively large inter-individual differences found in this thesis require an individualized approach in clinical or educational settings when non-native persons or people with hearing impairment need to perceive and remember potentially vital information. Individual differences in the very complex process of speech understanding and recall need to be further addressed by future studies. The relevance of cognitive factors and language proficiency provides opportunities for individuals who face difficulties to compensate using other abilities.
Avhandlingens övergripande syfte var att genom ett integrerat tillvägagångssätt undersöka mellan- och inom-individuella skillnader relevanta för normalhörande och hörselskadade vuxna i svenska och engelska språket under ogynnsamma lyssningsförhållanden. Med kognitiva faktorer i fokus, omfattade det integrerade tillvägagångssättet också perceptuella-auditiva och lingvistiska faktorer. Studie I undersökte i vilken utsträckning färdigheter i engelska inverkade på taluppfattning av ett modersmål och ett andra språk som var maskerat med brus jämfört med störande tal på svenska och engelska. Normalhörande vuxna deltog. Arbetsminneskapacitet på svenska och engelska liksom icke-verbal intelligens bedömdes också i studien. Designen i studie II var identisk med designen i studie I, förutom att personer med hörselnedsättning ingick som deltagare. Syftet med studie III var att bedöma hur normalhörande personer och personer med hörselnedsättning subjektivt utvärderade den upplevda störningen från tal- och brus på ett modersmål och ett andra språk. Studie IV undersökte hur väl normalhörande deltagare kom ihåg berättelser på svenska och engelska som presenterades omaskerade eller med störande tal på svenska eller engelska. Studie IV undersökte vidare arbetsminneskapacitet och episodiskt långtidsminne av berättelsernas innehåll liksom också färdighet i svenska och engelska språket. Resultaten visade att generellt var maskeringseffekten större vid störande tal jämfört med andra bruskällor både vad avser taluppfattning såväl som upplevd störning. Vad det gäller det engelska språket som talsignal, är störning från det svenska modersmålet påfrestande för taluppfattning, upplevd störning såväl som för minnesprocesser. Dock har stor inter- och intra-individuell variation mellan deltagarna observerats. En del av denna variation avser engelska språkfärdigheter.
Perceptuell och kognitiv påfrestning kan minska möjligheten till att säkra långsiktiga minnesprocesser även om ett stimuli var korrekt identifierat på en perceptuell nivå. En god arbetsminneskapacitet kan ge en bättre förmåga att undertrycka en distraktion och därmed fördela processresurserna för att nå de uppställda målen. De relativt stora inter-individuella skillnaderna i denna avhandling gör det angeläget med en individualiserad tillämpning, kliniskt eller inom utbildningsmässiga områden, när personer med hörselnedsättning eller personer med ett annat modersmål behöver uppfatta eller minnas potentiellt viktig information. De individuella skillnader som ligger bakom taluppfattning och minnesförmåga behöver utforskas vidare. Goda kognitiva förmågor och språkfärdigheter ger möjligheter för individer som möter svårigheter till att kompensera genom att använda dessa förmågor.
Style APA, Harvard, Vancouver, ISO itp.
50

Wong, Kwong Cheong. "Classifying Conversations". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC096/document.

Pełny tekst źródła
Streszczenie:
Le fait que toute activité linguistique et toute interactivité soient relatives à un jeu de langage / un genre / un type de conversation a été reconnu depuis longtemps par divers spécialistes. Néanmoins, jusqu'à présent, aucune théorie systématique n'a été construite qui puisse fournir de manière formelle la composition et la gamme des types de conversation actuels et possibles. Dans cette thèse, nous adoptons une approche topologique pour classifier les conversations et développons une théorie formelle des types conversationnels dans le cadre de Type Theory with Records. Les énoncés non phrastiques (ENP) - des énoncés fragmentaires qui sont des phrases sans prédicat mais qui expriment néanmoins un sens complet dans un contexte donné - sont un phénomène caractéristique de la conversation. Dans cette thèse, nous étudions les ENP dans un corpus chinois et examinons la distribution des ENP dans les genres parlés du Corpus National britannique (BNC). Cette thèse aborde le sujet de l'élaboration d'une théorie des types conversationnels qui peut expliquer la résolution des énoncés non phrastiques à travers différents types conversationnels. En revanche, nous testerons l'hypothèse que la variation entre les distributions des énoncés non phrastiques peut servir à structurer l'espace des types conversationnels.
That all linguistic activity and interactivity is relative to a domain/language-game/genre/conversational type has long been acknowledged by various scholars. Nonetheless, hitherto there has been no systematic theory that proposes a formal account of the make-up and range of actual and possible conversational types. In this thesis, we take a topological approach to classifying conversations and develop a formal theory of conversational types in the framework of Type Theory with Records (TTR). Non-sentential utterances (NSUs)—fragmentary utterances which are incomplete sentences but nevertheless convey a complete sentential meaning in the given context—are a characteristic phenomenon of conversation. In this thesis, we study NSUs in a Chinese corpus and investigate the distribution of NSUs across the spoken genres of the British National Corpus (BNC). This thesis tackles the topic of developing a theory of conversational types that can be used to explicate the resolution of NSUs across different conversational types. Conversely, we investigate whether the variation in the distribution of NSUs can serve as a means of structuring the space of conversational types.
Style APA, Harvard, Vancouver, ISO itp.