Contents
Selection of scholarly literature on the topic "Lipreading"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Lipreading".
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are present in its metadata.
Journal articles on the topic "Lipreading"
Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid". Journal of Speech, Language, and Hearing Research 32, No. 2 (June 1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.
Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability". Journal of Speech, Language, and Hearing Research 57, No. 2 (April 2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.
Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading". Ear and Hearing 9, No. 6 (December 1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.
Paulesu, E., D. Perani, V. Blasi, G. Silani, N. A. Borghese, U. De Giovanni, S. Sensolo, and F. Fazio. "A Functional-Anatomical Model for Lipreading". Journal of Neurophysiology 90, No. 3 (September 2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.
Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment". Journal of Speech, Language, and Hearing Research 60, No. 3 (March 2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.
Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What Makes a Skilled Speechreader?" Spanish Journal of Psychology 11, No. 2 (November 2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.
Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English". Journal of Speech, Language, and Hearing Research 43, No. 1 (February 2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.
Suess, Nina, Anne Hauswald, Verena Zehentner, Jessica Depireux, Gudrun Herzog, Sebastian Rösch, and Nathan Weisz. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language". PLOS ONE 17, No. 9 (30 September 2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.
Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks". Applied Sciences 11, No. 15 (29 July 2021): 6975. http://dx.doi.org/10.3390/app11156975.
Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.
Dissertations on the topic "Lipreading"
Lucey, Patrick Joseph. "Lipreading across multiple views". Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.
Lucey, Patrick Joseph. "Lipreading across multiple views". Queensland University of Technology, 2007. http://eprints.qut.edu.au/16676/.
MacLeod, A. "Effective methods for measuring lipreading skills". Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233400.
MacDermid, Catriona. "Lipreading and language processing by deaf children". Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291020.
Yuan, Hanfeng, 1972. "Tactual display of consonant voicing to supplement lipreading". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87906.
Der volle Inhalt der QuelleIncludes bibliographical references (p. 241-251).
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants, represented through multiple tokens of C₁VC₂ syllables, indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with speech signals indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
by Hanfeng Yuan.
Ph.D.
Chiou, Greg I. "Active contour models for distinct feature tracking and lipreading". Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6023.
Kaucic, Robert August. "Lip tracking for audio-visual speech recognition". Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360392.
Matthews, Iain. "Features for audio-visual speech recognition". Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.
Thangthai, Kwanchiva. "Computer lipreading via hybrid deep neural network hidden Markov models". Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.
Hiramatsu, Sandra. "Does lipreading help word reading? An investigation of the relationship between visible speech and early reading achievement". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/7913.
Der volle Inhalt der QuelleBücher zum Thema "Lipreading"
Woods, John Chaloner. Lipreading: A guide for beginners. London: Royal National Institute for the Deaf, 1991.
Erickson, Joan Good. Speech reading: An aid to communication. 2nd ed. Danville, Ill.: Interstate Printers & Publishers, 1989.
Chaloner, Woods John, ed. Watch this face: A practical guide to lipreading. London: Royal National Institute for Deaf People, 2003.
Dupret, Jean-Pierre. Stratégies visuelles dans la lecture labiale. Hamburg: H. Buske, 1986.
Martin, Christine. Speech perception: Writing functional material for lipreading classes. [S.l.]: [s.n.], 1995.
Beeching, David. Take another pick: A selection of lipreading exercises. [Stoke on Trent]: [ATLA], 1996.
Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash.: Breakout Productions, 1998.
Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash.: Loompanics, 1985.
Marcus, Irving S. Your eyes hear for you: A self-help course in speechreading. Bethesda, MD: Self Help for Hard of Hearing People, 1985.
Carter, Betty Woerner. I can't hear you in the dark: How to learn and teach lipreading. Springfield, Ill., U.S.A.: Charles C. Thomas, 1998.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Lipreading"
Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Lipreading with LipsID". In Speech and Computer, 176–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_18.
Bregler, Christoph, and Stephen M. Omohundro. "Learning Visual Models for Lipreading". In Computational Imaging and Vision, 301–20. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_13.
Paleček, Karel. "Spatiotemporal Convolutional Features for Lipreading". In Text, Speech, and Dialogue, 438–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_49.
Séguier, Renaud, and Nicolas Cladel. "Genetic Snakes: Application on Lipreading". In Artificial Neural Nets and Genetic Algorithms, 229–33. Vienna: Springer Vienna, 2003. http://dx.doi.org/10.1007/978-3-7091-0646-4_41.
Visser, Michiel, Mannes Poel, and Anton Nijholt. "Classifying Visemes for Automatic Lipreading". In Text, Speech and Dialogue, 349–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48239-3_65.
Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using Fourier transform over time". In Computer Analysis of Images and Patterns, 472–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_152.
Singh, Preety, Vijay Laxmi, Deepika Gupta, and M. S. Gaur. "Lipreading Using n-Gram Feature Vector". In Advances in Intelligent and Soft Computing, 81–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16626-6_9.
Owczarek, Agnieszka, and Krzysztof Ślot. "Lipreading Procedure Based on Dynamic Programming". In Artificial Intelligence and Soft Computing, 559–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29347-4_65.
Goldschen, Alan J., Oscar N. Garcia, and Eric D. Petajan. "Continuous Automatic Speech Recognition by Lipreading". In Computational Imaging and Vision, 321–43. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_14.
Tsunekawa, Takuya, Kazuhiro Hotta, and Haruhisa Takahashi. "Lipreading Using Recurrent Neural Prediction Model". In Lecture Notes in Computer Science, 405–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30126-4_50.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Lipreading"
Yavuz, Zafer, and Vasif V. Nabiyev. "Automatic Lipreading". In 2007 IEEE 15th Signal Processing and Communications Applications. IEEE, 2007. http://dx.doi.org/10.1109/siu.2007.4298783.
Gao, Wen, Jiyong Ma, Rui Wang, and Hongxun Yao. "Towards robust lipreading". In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-467.
Mase, Kenji, and Alex Pentland. "Lip Reading: Automatic Visual Recognition of Spoken Words". In Image Understanding and Machine Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/iumv.1989.wc1.
Burton, Jake, David Frank, Mahdi Saleh, Nassir Navab, and Helen L. Bear. "The speaker-independent lipreading play-off; a survey of lipreading machines". In 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2018. http://dx.doi.org/10.1109/ipas.2018.8708874.
Lucey, Patrick, Sridha Sridharan, and David Dean. "Continuous pose-invariant lipreading". In Interspeech 2008. ISCA, 2008. http://dx.doi.org/10.21437/interspeech.2008-664.
Luettin, Juergen, Neil A. Thacker, and Steve W. Beet. "Speaker identification by lipreading". In 4th International Conference on Spoken Language Processing (ICSLP 1996). ISCA, 1996. http://dx.doi.org/10.21437/icslp.1996-16.
Zhou, Ziheng, Guoying Zhao, and Matti Pietikainen. "Lipreading: A Graph Embedding Approach". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.133.
Martinez, Brais, Pingchuan Ma, Stavros Petridis, and Maja Pantic. "Lipreading Using Temporal Convolutional Networks". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053841.
Ong, Eng-Jon, and Richard Bowden. "Learning Sequential Patterns for Lipreading". In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.55.
Noda, Kuniaki, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G. Okuno, and Tetsuya Ogata. "Lipreading using convolutional neural network". In Interspeech 2014. ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-293.