Selection of scientific literature on the topic "Lipreading"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Lipreading".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Journal articles on the topic "Lipreading"

1

Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid". Journal of Speech, Language, and Hearing Research 32, no. 2 (June 1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.

Annotation:
A congenitally, profoundly deaf adult who had received 41 hours of tactual word recognition training in a previous study was assessed in tracking of connected discourse. This assessment was conducted in three phases. In the first phase, the subject used the Tacticon 1600 electrocutaneous vocoder to track a narrative in three conditions: (a) lipreading and aided hearing (L+H), (b) lipreading and tactual vocoder (L+TV), and (c) lipreading, tactual vocoder, and aided hearing (L+TV+H). Subject performance was significantly better in the L+TV+H condition than in the L+H condition, suggesting that the subject benefitted from the additional information provided by the tactual vocoder. In the second phase, the Tactaid II vibrotactile aid was used in three conditions: (a) lipreading alone, (b) lipreading and tactual aid (L+TA), and (c) lipreading, tactual aid, and aided hearing (L+TA+H). The subject was able to combine cues from the Tactaid II with those from lipreading and aided hearing. In the third phase, both tactual devices were used in six conditions: (a) lipreading alone (L), (b) lipreading and aided hearing (L+H), (c) lipreading and Tactaid II (L+TA), (d) lipreading and Tacticon 1600 (L+TV), (e) lipreading, Tactaid II, and aided hearing (L+TA+H), and (f) lipreading, Tacticon 1600, and aided hearing (L+TV+H). In this phase, only the Tactaid II significantly improved tracking performance over lipreading and aided hearing. Overall, improvement in tracking performance occurred within and across phases of this study.
2

Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability". Journal of Speech, Language, and Hearing Research 57, no. 2 (April 2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.

Annotation:
Purpose The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4 lipreading instruments plus measures of perceptual, cognitive, and linguistic abilities. Results For both groups, lipreading performance improved with age on all 4 measures of lipreading, with the HL group performing better than the NH group. Scores from the 4 measures loaded strongly on a single principal component. Only age, hearing status, and visuospatial working memory were significant predictors of lipreading performance. Conclusions Results showed that children's lipreading ability is not fixed but rather improves between 7 and 14 years of age. The finding that children with HL lipread better than those with NH suggests experience plays an important role in the development of this ability. In addition to age and hearing status, visuospatial working memory predicts lipreading performance in children, just as it does in adults. Future research on the developmental time-course of lipreading could permit interventions and pedagogies to be targeted at periods in which improvement is most likely to occur.
3

Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading". Ear and Hearing 9, no. 6 (December 1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.

4

Paulesu, E., D. Perani, V. Blasi, G. Silani, N. A. Borghese, U. De Giovanni, S. Sensolo, and F. Fazio. "A Functional-Anatomical Model for Lipreading". Journal of Neurophysiology 90, no. 3 (September 2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.

Annotation:
Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoke perception of isolated speech sounds (nonlexical lipreading) was associated with bilateral activation of the auditory association cortex around Wernicke's area, of left dorsal premotor cortex, and left opercular-premotor division of the left inferior frontal gyrus (Broca's area). The supplementary motor area was active as well. These areas have all been implicated in phonological processing, speech and mouth motor planning, and execution. In addition, nonlexical lipreading also differentially activated visual motion areas. Lexical access through lipreading was associated with a similar pattern of activation and with additional foci in ventral- and dorsolateral prefrontal cortex bilaterally and in left inferior parietal cortex. Linear regression analysis of cerebral blood flow and proficiency for lexical lipreading further clarified the role of these areas in gaining access to language through lipreading. The results suggest cortical activation circuits for lipreading from action representations that may differentiate lexical access from nonlexical processes.
5

Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment". Journal of Speech, Language, and Hearing Research 60, no. 3 (March 2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.

Annotation:
Purpose Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI). Method Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests. Results Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and in children with SLI. Lipreading was also found to correlate with several cognitive skills, for example, short-term memory capacity and verbal motor skills. Conclusions Speech processing deficits in SLI extend also to the perception of visual speech. Lipreading performance was associated with phonological skills. Poor lipreading in children with SLI may be, thus, related to problems in phonological processing.
6

Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What makes a Skilled Speechreader?" Spanish Journal of Psychology 11, no. 2 (November 2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.

Annotation:
Lipreading proficiency was investigated in a group of hearing-impaired people, all of them knowing Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and some other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. They all had sensorineural hearing losses (from severe to profound). The lipreading procedures comprised identification of words in isolation. The words selected for presentation in isolation were spoken by the same talker. Identification of words required participants to select their responses from a set of four appropriately labelled pictures. Lipreading was significantly correlated with intelligence and intelligibility. Multiple regression analyses were used to obtain a prediction equation for the lipreading measures. As a result of this procedure, it is concluded that proficient deaf lipreaders are more intelligent and that their oral speech is more comprehensible to others.
7

Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English". Journal of Speech, Language, and Hearing Research 43, no. 1 (February 2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.

Annotation:
The speech perception skills of GS, a Swedish adult deaf man who has used a "natural" tactile supplement to lipreading for over 45 years, were tested in two languages: Swedish and English. Two different tactile supplements to lipreading were investigated. In the first, "Tactiling," GS detected the vibrations accompanying speech by placing his thumb directly on the speaker’s throat. In the second, a simple tactile aid consisting of a throat microphone, amplifier, and a hand-held bone vibrator was used. Both supplements led to improved lipreading of materials ranging in complexity from consonants in [aCa] nonsense syllables to Speech Tracking. Analysis of GS’s results indicated that the tactile signal assisted him in identifying vowel duration, consonant voicing, and some manner of articulation categories. GS’s tracking rate in Swedish was around 40 words per minute when the materials were presented via lipreading alone. When the lipreading signal was supplemented by tactile cues, his tracking rates were in the range of 60–65 words per minute. Although GS’s tracking rates for English materials were around half those achieved in Swedish, his performance showed a similar pattern in that the use of tactile cues led to improvements of around 40% over lipreading alone.
8

Suess, Nina, Anne Hauswald, Verena Zehentner, Jessica Depireux, Gudrun Herzog, Sebastian Rösch, and Nathan Weisz. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language". PLOS ONE 17, no. 9 (September 30, 2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.

Annotation:
Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we wanted to (1) investigate how linguistic characteristics of language on the one hand and hearing impairment on the other hand have an impact on lipreading abilities and (2) provide a tool to assess lipreading abilities for German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers, words, and sentences) were spoken. The task for our participants was to recognize the spoken stimuli just by visual inspection. We used different versions of one test and investigated the impact of item categories, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers on the recognition score. We found an effect of item categories, articulation, sentence frequency, and sentence length on the recognition score. With respect to hearing impairment, we found that higher subjective hearing impairment is associated with a higher test score. We did not find any evidence that prelingually deaf individuals show enhanced lipreading skills over people with postlingually acquired hearing impairment. However, we see an interaction with education only in the prelingually deaf, but not in the population with postlingually acquired hearing loss. This points to the fact that there are different factors contributing to enhanced lipreading abilities depending on the onset of hearing impairment (prelingual vs. postlingual). Overall, lipreading skills vary strongly in the general population independent of hearing impairment. Based on our findings we constructed a new and efficient lipreading assessment tool (SaLT) that can be used to test behavioral lipreading abilities in the German-speaking population.
9

Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks". Applied Sciences 11, no. 15 (July 29, 2021): 6975. http://dx.doi.org/10.3390/app11156975.

Annotation:
Lipreading aims to recognize sentences being spoken by a talking face. In recent years, lipreading methods have achieved a high level of accuracy on large datasets and made breakthrough progress. However, lipreading is still far from being solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing training gradients and slow convergence. To overcome these problems, we proposed an efficient end-to-end sentence-level lipreading model, using an encoder based on a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. This partly eliminates the vanishing-gradient and performance limitations of RNNs (LSTM, GRU) and yields a notable performance improvement as well as faster convergence. Experiments show that training and convergence are 50% faster than for the state-of-the-art method, and accuracy is improved by 2.4% on the GRID dataset.
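
The pipeline this abstract describes (a 3D convolutional frontend, a ResNet-style trunk, a temporal convolutional network, and CTC decoding) can be illustrated with a short sketch. The code below is not the authors' implementation: the ResNet50 trunk is replaced by simple spatial pooling, and the layer sizes, character vocabulary, and the name LipreadingTCN are illustrative assumptions.

    # Minimal sketch of a sentence-level lipreading model in the spirit of
    # "3D conv frontend -> spatial trunk -> temporal convolutional network -> CTC".
    # Layer sizes and the character vocabulary are illustrative assumptions.
    import torch
    import torch.nn as nn

    class LipreadingTCN(nn.Module):
        def __init__(self, num_chars=28, hidden=256):
            super().__init__()
            # 3D convolution over (time, height, width) of the mouth-region video.
            self.frontend = nn.Sequential(
                nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
                nn.BatchNorm3d(64), nn.ReLU(),
                nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
            )
            # Stand-in for the ResNet50 trunk: collapse the spatial dimensions per frame.
            self.spatial_pool = nn.AdaptiveAvgPool3d((None, 1, 1))
            # Temporal convolutional network: stacked dilated 1-D convolutions.
            self.tcn = nn.Sequential(
                nn.Conv1d(64, hidden, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
            )
            self.classifier = nn.Linear(hidden, num_chars)  # per-frame character logits

        def forward(self, video):                      # video: (batch, 1, T, H, W)
            x = self.frontend(video)                   # (batch, 64, T, H', W')
            x = self.spatial_pool(x).squeeze(-1).squeeze(-1)    # (batch, 64, T)
            x = self.tcn(x)                            # (batch, hidden, T)
            return self.classifier(x.transpose(1, 2))  # (batch, T, num_chars)

    # CTC training step on dummy data (blank symbol = index 0).
    model = LipreadingTCN()
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    video = torch.randn(2, 1, 75, 64, 96)              # two clips of 75 frames
    logits = model(video)                               # (2, 75, 28)
    log_probs = logits.log_softmax(-1).transpose(0, 1)  # (T, batch, classes) for CTCLoss
    targets = torch.randint(1, 28, (2, 20))             # dummy character labels
    loss = ctc(log_probs, targets,
               input_lengths=torch.full((2,), 75, dtype=torch.long),
               target_lengths=torch.full((2,), 20, dtype=torch.long))
    loss.backward()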
10

Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.

Annotation:
Lipreading has many potential applications, such as surveillance and video conferencing. Despite this, most of the work in building lipreading systems has been limited to classifying silent videos into classes representing text phrases. However, there are multiple problems associated with making lipreading a text-based classification task, such as its dependence on a particular language and vocabulary mapping. Thus, in this paper we propose a multi-view lipreading to audio system, namely Lipper, which models it as a regression task. The model takes silent videos as input and produces speech as the output. With multi-view silent videos, we observe an improvement over single-view speech reconstruction results. We show this by presenting an exhaustive set of experiments for speaker-dependent, out-of-vocabulary and speaker-independent settings. Further, we compare the delay values of Lipper with other speechreading systems in order to show the real-time nature of audio produced. We also perform a user study for the audios produced in order to understand the level of comprehensibility of audios produced using Lipper.
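
Framing lipreading as regression rather than classification means the network maps silent video to acoustic features instead of to text labels. The sketch below only illustrates that framing; the shared per-view encoder, the use of log-mel spectrogram frames as targets, and all dimensions are assumptions, not the Lipper architecture.

    # Illustrative sketch: lipreading as regression from multi-view silent video to
    # acoustic features (assumed here to be log-mel spectrogram frames), trained with MSE.
    import torch
    import torch.nn as nn

    class VideoToSpeech(nn.Module):
        def __init__(self, num_views=3, feat=128, n_mels=80):
            super().__init__()
            # One shared 2-D CNN encoder applied to every view of every frame.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat),
            )
            # Fuse the per-view embeddings and predict one spectrogram frame per video frame.
            self.decoder = nn.Sequential(
                nn.Linear(num_views * feat, 256), nn.ReLU(),
                nn.Linear(256, n_mels),
            )

        def forward(self, views):                    # views: (batch, views, T, H, W)
            b, v, t, h, w = views.shape
            frames = views.reshape(b * v * t, 1, h, w)
            emb = self.encoder(frames).reshape(b, v, t, -1)     # (b, v, T, feat)
            fused = emb.permute(0, 2, 1, 3).reshape(b, t, -1)   # (b, T, v*feat)
            return self.decoder(fused)                          # (b, T, n_mels)

    model = VideoToSpeech()
    videos = torch.randn(2, 3, 50, 48, 48)   # two clips, three camera views, 50 frames
    target_mels = torch.randn(2, 50, 80)     # time-aligned acoustic targets
    loss = nn.functional.mse_loss(model(videos), target_mels)
    loss.backward()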

Dissertations on the topic "Lipreading"

1

Lucey, Patrick Joseph. "Lipreading across multiple views". Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.

Annotation:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lecture" type scenarios via "smart-rooms". This has resulted in the collection of audio-visual speech data which allows for the recognition of visual speech from both frontal and non-frontal views to occur. Using this data, the main focus of this thesis was to investigate and develop various methods within the confines of a lipreading system which can recognise visual speech across multiple views. This research constitutes the first published work within the field which looks at this particular aspect of AVASR. The task of recognising visual speech from non-frontal views (i.e. profile) is in principle very similar to that of frontal views, requiring the lipreading system to initially locate and track the mouth region and subsequently extract visual features. However, this task is far more complicated than the frontal case, because the facial features required to locate and track the mouth lie in a much more limited spatial plane. Nevertheless, accurate mouth region tracking can be achieved by employing techniques similar to frontal facial feature localisation. Once the mouth region has been extracted, the same visual feature extraction process can take place as for the frontal view. A novel contribution of this thesis is to quantify the degradation in lipreading performance between the frontal and profile views. In addition to this, novel patch-based analysis of the various views is conducted, and as a result a novel multi-stream patch-based representation is formulated. Having a lipreading system which can recognise visual speech from both frontal and profile views is a novel contribution to the field of AVASR. However, given both the frontal and profile viewpoints, this begs the question, is there any benefit of having the additional viewpoint? Another major contribution of this thesis is an exploration of a novel multi-view lipreading system. This system shows that there does exist complementary information in the additional viewpoint (possibly that of lip protrusion), with superior performance achieved in the multi-view system compared to the frontal-only system. Even though having a multi-view lipreading system which can recognise visual speech from both front and profile views is very beneficial, it can hardly be considered realistic, as each particular viewpoint is dedicated to a single pose (i.e. front or profile). In an effort to make the lipreading system more realistic, a unified system based on a single camera was developed which enables a lipreading system to recognise visual speech from both frontal and profile poses. This is called pose-invariant lipreading. Pose-invariant lipreading can be performed on either stationary or continuous tasks.
Methods which effectively normalise the various poses into a single pose were investigated for the stationary scenario, and in another contribution of this thesis an algorithm based on regularised linear regression was employed to project all the visual speech features into a uniform pose. This particular method is shown to be beneficial when the lipreading system is biased towards the dominant pose (i.e. frontal). The final contribution of this thesis is the formulation of a continuous pose-invariant lipreading system which contains a pose-estimator at the start of the visual front-end. This system highlights the complexity of developing such a system, as introducing more flexibility within the lipreading system invariably means the introduction of more error. All the works contained in this thesis present novel and innovative contributions to the field of AVASR, and hopefully this will aid in the future deployment of an AVASR system in realistic scenarios.
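
The regularised linear regression mentioned here, used to project visual speech features from an arbitrary pose into a uniform (e.g. frontal) pose, amounts to fitting a ridge-regression mapping between paired feature sets. A minimal sketch under that reading, with made-up dimensions and synthetic data, is:

    # Ridge regression that maps profile-view visual speech features onto the
    # frontal-view feature space, as a stand-in for "regularised linear regression
    # pose normalisation". Dimensions and the synthetic data are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d_profile, d_frontal = 500, 40, 40
    X = rng.normal(size=(n, d_profile))           # features extracted from profile views
    W_true = rng.normal(size=(d_profile, d_frontal))
    Y = X @ W_true + 0.1 * rng.normal(size=(n, d_frontal))   # paired frontal features

    lam = 1.0                                     # regularisation strength
    # Closed-form ridge solution: W = (X^T X + lam * I)^(-1) X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(d_profile), X.T @ Y)

    def normalise_pose(profile_features):
        """Project profile-view features into the frontal feature space."""
        return profile_features @ W

    frontal_estimate = normalise_pose(X[:5])
    print(frontal_estimate.shape)                 # (5, 40)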
2

Lucey, Patrick Joseph. "Lipreading across multiple views". Queensland University of Technology, 2007. http://eprints.qut.edu.au/16676/.

Annotation:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lecture" type scenarios via "smart-rooms". This has resulted in the collection of audio-visual speech data which allows for the recognition of visual speech from both frontal and non-frontal views to occur. Using this data, the main focus of this thesis was to investigate and develop various methods within the confines of a lipreading system which can recognise visual speech across multiple views. This research constitutes the first published work within the field which looks at this particular aspect of AVASR. The task of recognising visual speech from non-frontal views (i.e. profile) is in principle very similar to that of frontal views, requiring the lipreading system to initially locate and track the mouth region and subsequently extract visual features. However, this task is far more complicated than the frontal case, because the facial features required to locate and track the mouth lie in a much more limited spatial plane. Nevertheless, accurate mouth region tracking can be achieved by employing techniques similar to frontal facial feature localisation. Once the mouth region has been extracted, the same visual feature extraction process can take place as for the frontal view. A novel contribution of this thesis is to quantify the degradation in lipreading performance between the frontal and profile views. In addition to this, novel patch-based analysis of the various views is conducted, and as a result a novel multi-stream patch-based representation is formulated. Having a lipreading system which can recognise visual speech from both frontal and profile views is a novel contribution to the field of AVASR. However, given both the frontal and profile viewpoints, this begs the question, is there any benefit of having the additional viewpoint? Another major contribution of this thesis is an exploration of a novel multi-view lipreading system. This system shows that there does exist complementary information in the additional viewpoint (possibly that of lip protrusion), with superior performance achieved in the multi-view system compared to the frontal-only system. Even though having a multi-view lipreading system which can recognise visual speech from both front and profile views is very beneficial, it can hardly be considered realistic, as each particular viewpoint is dedicated to a single pose (i.e. front or profile). In an effort to make the lipreading system more realistic, a unified system based on a single camera was developed which enables a lipreading system to recognise visual speech from both frontal and profile poses. This is called pose-invariant lipreading. Pose-invariant lipreading can be performed on either stationary or continuous tasks.
Methods which effectively normalise the various poses into a single pose were investigated for the stationary scenario, and in another contribution of this thesis an algorithm based on regularised linear regression was employed to project all the visual speech features into a uniform pose. This particular method is shown to be beneficial when the lipreading system is biased towards the dominant pose (i.e. frontal). The final contribution of this thesis is the formulation of a continuous pose-invariant lipreading system which contains a pose-estimator at the start of the visual front-end. This system highlights the complexity of developing such a system, as introducing more flexibility within the lipreading system invariably means the introduction of more error. All the works contained in this thesis present novel and innovative contributions to the field of AVASR, and hopefully this will aid in the future deployment of an AVASR system in realistic scenarios.
3

MacLeod, A. "Effective methods for measuring lipreading skills". Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233400.

4

MacDermid, Catriona. "Lipreading and language processing by deaf children". Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291020.

5

Yuan, Hanfeng 1972. "Tactual display of consonant voicing to supplement lipreading". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87906.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 241-251).
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants represented through multiple tokens of C₁VC₂ syllables indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with the speech signal indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
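
The envelope-onset asynchrony (EOA) cue described in this record can be approximated with standard band filtering and envelope extraction. The sketch below is only an illustration of the idea: the filter orders, envelope smoothing, and onset threshold are assumptions, not the thesis's actual signal processing.

    # Illustrative extraction of an envelope-onset asynchrony (EOA) cue:
    # amplitude envelopes of a low band (< 350 Hz) and a high band (> 3000 Hz),
    # onset times from a simple threshold, EOA = high-band onset - low-band onset.
    # Filter orders, smoothing, and the threshold are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def band_envelope(x, fs, low=None, high=None, smooth_hz=50.0):
        if low is not None and high is None:
            b, a = butter(4, low, btype="low", fs=fs)
        elif high is not None and low is None:
            b, a = butter(4, high, btype="high", fs=fs)
        else:
            raise ValueError("specify exactly one of low/high cutoff")
        band = filtfilt(b, a, x)
        rectified = np.abs(band)
        bs, as_ = butter(2, smooth_hz, btype="low", fs=fs)   # smooth the envelope
        return filtfilt(bs, as_, rectified)

    def onset_time(env, fs, frac=0.1):
        """First sample where the envelope exceeds a fraction of its peak."""
        idx = np.argmax(env > frac * env.max())
        return idx / fs

    def envelope_onset_asynchrony(x, fs):
        low_env = band_envelope(x, fs, low=350.0)
        high_env = band_envelope(x, fs, high=3000.0)
        return onset_time(high_env, fs) - onset_time(low_env, fs)

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    # Toy stimulus: low-frequency energy (voicing) starts 60 ms before high-frequency energy.
    voiced_like = np.sin(2 * np.pi * 150 * t) * (t > 0.10) + \
                  0.5 * np.sin(2 * np.pi * 4000 * t) * (t > 0.16)
    print(f"EOA = {envelope_onset_asynchrony(voiced_like, fs) * 1000:.1f} ms")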
6

Chiou, Greg I. "Active contour models for distinct feature tracking and lipreading /". Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6023.

7

Kaucic, Robert August. "Lip tracking for audio-visual speech recognition". Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360392.

8

Matthews, Iain. "Features for audio-visual speech recognition". Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.

9

Thangthai, Kwanchiva. "Computer lipreading via hybrid deep neural network hidden Markov models". Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.

Annotation:
Constructing a viable lipreading system is a challenge because it is claimed that only 30% of information of speech production is visible on the lips. Nevertheless, in small vocabulary tasks, there have been several reports of high accuracies. However, investigation of larger vocabulary tasks is rare. This work examines constructing a large vocabulary lipreading system using an approach based on Deep Neural Network Hidden Markov Models (DNN-HMMs). We present the historical development of computer lipreading technology and the state-of-the-art results in small and large vocabulary tasks. In preliminary experiments, we evaluate the performance of lipreading and audiovisual speech recognition in small vocabulary data sets. We then concentrate on the improvement of lipreading systems in a more substantial vocabulary size with a multi-speaker data set. We tackle the problem of lipreading an unseen speaker. We investigate the effect of employing several steps to pre-process visual features. Moreover, we examine the contribution of language modelling in a lipreading system where we use longer n-grams to recognise visual speech. Our lipreading system is constructed on the 6000-word vocabulary TCD-TIMIT audiovisual speech corpus. The results show that visual-only speech recognition can definitely reach about 60% word accuracy on large vocabularies. We actually achieved a mean of 59.42% measured via three-fold cross-validation on the speaker independent setting of the TCD-TIMIT corpus using Deep autoencoder features and DNN-HMM models. This is the best word accuracy of a lipreading system in a large vocabulary task reported on the TCD-TIMIT corpus. In the final part of the thesis, we examine how the DNN-HMM model improves lipreading performance. We also give an insight into lipreading by providing a feature visualisation. Finally, we present an analysis of lipreading results and suggestions for future development.
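
In a hybrid DNN-HMM system of the kind examined in this thesis, the network's per-frame state posteriors are typically converted into scaled likelihoods (posterior divided by state prior) before HMM decoding. A short sketch of that standard conversion, with synthetic posteriors and made-up dimensions, is:

    # Standard hybrid DNN-HMM trick: turn per-frame state posteriors P(s|x) from the
    # network into scaled likelihoods P(x|s) proportional to P(s|x) / P(s) for decoding.
    # The posteriors and priors below are synthetic; dimensions are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    num_frames, num_states = 120, 500

    # Per-frame posteriors from a (hypothetical) DNN, rows sum to one.
    logits = rng.normal(size=(num_frames, num_states))
    posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    # State priors estimated from the frequency of each state in the training alignments.
    state_counts = rng.integers(1, 1000, size=num_states)
    priors = state_counts / state_counts.sum()

    # Scaled log-likelihoods passed to the HMM/WFST decoder in place of emission scores.
    scaled_loglik = np.log(posteriors + 1e-10) - np.log(priors)
    print(scaled_loglik.shape)   # (120, 500)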
10

Hiramatsu, Sandra. "Does lipreading help word reading? : an investigation of the relationship between visible speech and early reading achievement /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/7913.


Books on the topic "Lipreading"

1

Woods, John Chaloner. Lipreading: A guide for beginners. London: Royal National Institute for the Deaf, 1991.

2

Erickson, Joan Good. Speech reading: An aid to communication. 2nd ed. Danville, Ill: Interstate Printers & Publishers, 1989.

3

Chaloner, Woods John, ed. Watch this face: A practical guide to lipreading. London: Royal National Institute for Deaf People, 2003.

4

Dupret, Jean-Pierre. Stratégies visuelles dans la lecture labiale. Hamburg: H. Buske, 1986.

5

Martin, Christine. Speech perception: Writing functional material for lipreading classes. [S.l]: [s.n.], 1995.

6

Beeching, David. Take another pick: A selection of lipreading exercises. [Stoke on Trent]: [ATLA], 1996.

7

Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash: Breakout Productions, 1998.

8

Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash: Loompanics, 1985.

9

Marcus, Irving S. Your eyes hear for you: A self-help course in speechreading. Bethesda, MD: Self Help for Hard of Hearing People, 1985.

10

Carter, Betty Woerner. I can't hear you in the dark: How to learn and teach lipreading. Springfield, Ill., U.S.A: Charles C. Thomas, 1998.


Book chapters on the topic "Lipreading"

1

Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Lipreading with LipsID". In Speech and Computer, 176–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_18.

2

Bregler, Christoph, and Stephen M. Omohundro. "Learning Visual Models for Lipreading". In Computational Imaging and Vision, 301–20. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_13.

3

Paleček, Karel. "Spatiotemporal Convolutional Features for Lipreading". In Text, Speech, and Dialogue, 438–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_49.

4

Séguier, Renaud, and Nicolas Cladel. "Genetic Snakes: Application on Lipreading". In Artificial Neural Nets and Genetic Algorithms, 229–33. Vienna: Springer Vienna, 2003. http://dx.doi.org/10.1007/978-3-7091-0646-4_41.

5

Visser, Michiel, Mannes Poel, and Anton Nijholt. "Classifying Visemes for Automatic Lipreading". In Text, Speech and Dialogue, 349–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48239-3_65.

6

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using Fourier transform over time". In Computer Analysis of Images and Patterns, 472–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_152.

7

Singh, Preety, Vijay Laxmi, Deepika Gupta, and M. S. Gaur. "Lipreading Using n–Gram Feature Vector". In Advances in Intelligent and Soft Computing, 81–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16626-6_9.

8

Owczarek, Agnieszka, and Krzysztof Ślot. "Lipreading Procedure Based on Dynamic Programming". In Artificial Intelligence and Soft Computing, 559–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29347-4_65.

9

Goldschen, Alan J., Oscar N. Garcia, and Eric D. Petajan. "Continuous Automatic Speech Recognition by Lipreading". In Computational Imaging and Vision, 321–43. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_14.

10

Tsunekawa, Takuya, Kazuhiro Hotta, and Haruhisa Takahashi. "Lipreading Using Recurrent Neural Prediction Model". In Lecture Notes in Computer Science, 405–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30126-4_50.


Conference papers on the topic "Lipreading"

1

Yavuz, Zafer, and Vasif V. Nabiyev. "Automatic Lipreading". In 2007 IEEE 15th Signal Processing and Communications Applications. IEEE, 2007. http://dx.doi.org/10.1109/siu.2007.4298783.

2

Gao, Wen, Jiyong Ma, Rui Wang, and Hongxun Yao. "Towards robust lipreading". In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-467.

3

Mase, Kenji, and Alex Pentland. "Lip Reading: Automatic Visual Recognition of Spoken Words". In Image Understanding and Machine Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/iumv.1989.wc1.

Annotation:
Lipreading is a rich source of speech information, and in noisy environments it can even be the primary source of information. In day-to-day situations lipreading is important because it provides a source of information that is largely independent of the auditory signal, so that auditory and lipreading information can be combined to produce more accurate and robust speech recognition. For instance, the nasal sounds ‘n’, ‘m’, and ‘ng’ are quite difficult to distinguish acoustically, but have very different visual appearance.
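
The combination of largely independent auditory and visual cues that this abstract argues for is commonly realised as a fusion of the two classifiers' scores. A minimal late-fusion sketch (a log-linear combination of audio and visual class posteriors with an assumed reliability weight, not the authors' method) is:

    # Minimal late fusion of audio and visual phone posteriors: a log-linear
    # combination whose weight reflects how reliable the audio stream is
    # (e.g. lower weight in noise). All numbers here are illustrative.
    import numpy as np

    def fuse_posteriors(p_audio, p_visual, audio_weight=0.7):
        log_p = audio_weight * np.log(p_audio + 1e-10) \
                + (1.0 - audio_weight) * np.log(p_visual + 1e-10)
        p = np.exp(log_p - log_p.max(axis=-1, keepdims=True))
        return p / p.sum(axis=-1, keepdims=True)

    # Toy example with three classes, e.g. the acoustically confusable /n/, /m/, /ng/:
    p_audio = np.array([0.36, 0.33, 0.31])    # audio alone can barely separate them
    p_visual = np.array([0.10, 0.80, 0.10])   # the lips disambiguate /m/
    print(fuse_posteriors(p_audio, p_visual, audio_weight=0.5))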
4

Burton, Jake, David Frank, Mahdi Saleh, Nassir Navab, and Helen L. Bear. "The speaker-independent lipreading play-off; a survey of lipreading machines". In 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2018. http://dx.doi.org/10.1109/ipas.2018.8708874.

5

Lucey, Patrick, Sridha Sridharan, and David Dean. "Continuous pose-invariant lipreading". In Interspeech 2008. ISCA: ISCA, 2008. http://dx.doi.org/10.21437/interspeech.2008-664.

6

Luettin, Juergen, Neil A. Thacker, and Steve W. Beet. "Speaker identification by lipreading". In 4th International Conference on Spoken Language Processing (ICSLP 1996). ISCA: ISCA, 1996. http://dx.doi.org/10.21437/icslp.1996-16.

7

Zhou, Ziheng, Guoying Zhao, and Matti Pietikainen. "Lipreading: A Graph Embedding Approach". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.133.

8

Martinez, Brais, Pingchuan Ma, Stavros Petridis, and Maja Pantic. "Lipreading Using Temporal Convolutional Networks". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053841.

9

Ong, Eng-Jon, and Richard Bowden. "Learning Sequential Patterns for Lipreading". In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.55.

10

Noda, Kuniaki, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G. Okuno, and Tetsuya Ogata. "Lipreading using convolutional neural network". In Interspeech 2014. ISCA: ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-293.
