Ready-made bibliography on the topic "Lipreading"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Lipreading".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, where these details are provided in the metadata.
Journal articles on the topic "Lipreading"
Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid". Journal of Speech, Language, and Hearing Research 32, no. 2 (June 1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.
Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability". Journal of Speech, Language, and Hearing Research 57, no. 2 (April 2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.
Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading". Ear and Hearing 9, no. 6 (December 1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.
Paulesu, E., D. Perani, V. Blasi, G. Silani, N. A. Borghese, U. De Giovanni, S. Sensolo, and F. Fazio. "A Functional-Anatomical Model for Lipreading". Journal of Neurophysiology 90, no. 3 (September 2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.
Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment". Journal of Speech, Language, and Hearing Research 60, no. 3 (March 2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.
Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What makes a Skilled Speechreader?" Spanish Journal of Psychology 11, no. 2 (November 2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.
Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English". Journal of Speech, Language, and Hearing Research 43, no. 1 (February 2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.
Suess, Nina, Anne Hauswald, Verena Zehentner, Jessica Depireux, Gudrun Herzog, Sebastian Rösch, and Nathan Weisz. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language". PLOS ONE 17, no. 9 (September 30, 2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.
Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks". Applied Sciences 11, no. 15 (July 29, 2021): 6975. http://dx.doi.org/10.3390/app11156975.
Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.
Doctoral dissertations on the topic "Lipreading"
Lucey, Patrick Joseph. "Lipreading across multiple views". Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.
Lucey, Patrick Joseph. "Lipreading across multiple views". Queensland University of Technology, 2007. http://eprints.qut.edu.au/16676/.
MacLeod, A. "Effective methods for measuring lipreading skills". Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233400.
MacDermid, Catriona. "Lipreading and language processing by deaf children". Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291020.
Yuan, Hanfeng 1972. "Tactual display of consonant voicing to supplement lipreading". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87906.
Pełny tekst źródłaIncludes bibliographical references (p. 241-251).
This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants represented through multiple tokens of C₁VC₂ syllables indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50T).
The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with speech signal indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for reception of words in sentences, indicating the need for further training on this task.
by Hanfeng Yuan.
Ph.D.
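The abstract above describes deriving an envelope-onset asynchrony (EOA) cue from amplitude envelopes of a lowpass band at 350 Hz and a highpass band at 3000 Hz, with the onset-time difference separating voiced from voiceless consonants. The sketch below illustrates that idea on a synthetic stimulus; it is not the thesis's implementation — the 16 kHz sample rate, FIR filter design, 20 ms smoothing window, and 10% onset threshold are all assumptions made for the example.

```python
import numpy as np

FS = 16000  # sample rate in Hz (an assumption; not specified in the abstract)

def windowed_sinc_lowpass(cutoff_hz, numtaps=255):
    """Linear-phase FIR lowpass: Hamming-windowed sinc, unity DC gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / FS * n) * np.hamming(numtaps)
    return h / h.sum()

def windowed_sinc_highpass(cutoff_hz, numtaps=255):
    """Highpass by spectral inversion of the matching lowpass."""
    h = -windowed_sinc_lowpass(cutoff_hz, numtaps)
    h[(numtaps - 1) // 2] += 1.0
    return h

def envelope(x, h, smooth_ms=20):
    """Band-filter, full-wave rectify, and smooth with a moving average."""
    band = np.convolve(x, h, mode="same")
    win = np.ones(int(FS * smooth_ms / 1000))
    return np.convolve(np.abs(band), win / win.size, mode="same")

def onset_index(env, frac=0.1):
    """First sample whose envelope exceeds `frac` of the envelope peak."""
    return int(np.argmax(env >= frac * env.max()))

def eoa_ms(x):
    """Envelope-onset asynchrony: low-band onset minus high-band onset, ms.
    Large positive values suggest a voiceless consonant (voicing lags)."""
    lo = envelope(x, windowed_sinc_lowpass(350))
    hi = envelope(x, windowed_sinc_highpass(3000))
    return (onset_index(lo) - onset_index(hi)) / FS * 1000.0

# Toy voiceless-like stimulus: high-frequency "frication" noise starting
# 40 ms before the low-frequency "voicing" component.
t = np.arange(int(0.3 * FS)) / FS
rng = np.random.default_rng(0)
frication = np.convolve(rng.standard_normal(t.size),
                        windowed_sinc_highpass(3000), mode="same") * (t > 0.05)
voicing = np.sin(2 * np.pi * 150 * t) * (t > 0.09)
print(f"EOA = {eoa_ms(frication + voicing):.1f} ms")  # positive: voicing lags
```

In the thesis the two envelopes then modulate tactual carriers (a 250 Hz carrier on the index finger, a 50 Hz carrier on the thumb); the sketch stops at the acoustic cue itself.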
Chiou, Greg I. "Active contour models for distinct feature tracking and lipreading /". Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6023.
Kaucic, Robert August. "Lip tracking for audio-visual speech recognition". Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360392.
Matthews, Iain. "Features for audio-visual speech recognition". Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.
Thangthai, Kwanchiva. "Computer lipreading via hybrid deep neural network hidden Markov models". Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.
Hiramatsu, Sandra. "Does lipreading help word reading? : an investigation of the relationship between visible speech and early reading achievement /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/7913.
Books on the topic "Lipreading"
Woods, John Chaloner. Lipreading: A guide for beginners. London: Royal National Institute for the Deaf, 1991.
Erickson, Joan Good. Speech reading: An aid to communication. 2nd ed. Danville, Ill: Interstate Printers & Publishers, 1989.
Chaloner, Woods John, ed. Watch this face: A practical guide to lipreading. London: Royal National Institute for Deaf People, 2003.
Dupret, Jean-Pierre. Stratégies visuelles dans la lecture labiale. Hamburg: H. Buske, 1986.
Martin, Christine. Speech perception: Writing functional material for lipreading classes. [S.l]: [s.n.], 1995.
Beeching, David. Take another pick: A selection of lipreading exercises. [Stoke on Trent]: [ATLA], 1996.
Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash: Breakout Productions, 1998.
Nitchie, Edward Bartlett. Lip reading made easy. Port Townsend, Wash: Loompanics, 1985.
Marcus, Irving S. Your eyes hear for you: A self-help course in speechreading. Bethesda, MD: Self Help for Hard of Hearing People, 1985.
Carter, Betty Woerner. I can't hear you in the dark: How to learn and teach lipreading. Springfield, Ill., U.S.A: Charles C. Thomas, 1998.
Book chapters on the topic "Lipreading"
Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Lipreading with LipsID". In Speech and Computer, 176–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_18.
Bregler, Christoph, and Stephen M. Omohundro. "Learning Visual Models for Lipreading". In Computational Imaging and Vision, 301–20. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_13.
Paleček, Karel. "Spatiotemporal Convolutional Features for Lipreading". In Text, Speech, and Dialogue, 438–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_49.
Séguier, Renaud, and Nicolas Cladel. "Genetic Snakes: Application on Lipreading". In Artificial Neural Nets and Genetic Algorithms, 229–33. Vienna: Springer Vienna, 2003. http://dx.doi.org/10.1007/978-3-7091-0646-4_41.
Visser, Michiel, Mannes Poel, and Anton Nijholt. "Classifying Visemes for Automatic Lipreading". In Text, Speech and Dialogue, 349–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48239-3_65.
Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using Fourier transform over time". In Computer Analysis of Images and Patterns, 472–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_152.
Singh, Preety, Vijay Laxmi, Deepika Gupta, and M. S. Gaur. "Lipreading Using n–Gram Feature Vector". In Advances in Intelligent and Soft Computing, 81–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16626-6_9.
Owczarek, Agnieszka, and Krzysztof Ślot. "Lipreading Procedure Based on Dynamic Programming". In Artificial Intelligence and Soft Computing, 559–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29347-4_65.
Goldschen, Alan J., Oscar N. Garcia, and Eric D. Petajan. "Continuous Automatic Speech Recognition by Lipreading". In Computational Imaging and Vision, 321–43. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_14.
Tsunekawa, Takuya, Kazuhiro Hotta, and Haruhisa Takahashi. "Lipreading Using Recurrent Neural Prediction Model". In Lecture Notes in Computer Science, 405–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30126-4_50.
Conference papers on the topic "Lipreading"
Yavuz, Zafer, and Vasif V. Nabiyev. "Automatic Lipreading". In 2007 IEEE 15th Signal Processing and Communications Applications. IEEE, 2007. http://dx.doi.org/10.1109/siu.2007.4298783.
Gao, Wen, Jiyong Ma, Rui Wang, and Hongxun Yao. "Towards robust lipreading". In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-467.
Mase, Kenji, and Alex Pentland. "Lip Reading: Automatic Visual Recognition of Spoken Words". In Image Understanding and Machine Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/iumv.1989.wc1.
Burton, Jake, David Frank, Mahdi Saleh, Nassir Navab, and Helen L. Bear. "The speaker-independent lipreading play-off; a survey of lipreading machines". In 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2018. http://dx.doi.org/10.1109/ipas.2018.8708874.
Lucey, Patrick, Sridha Sridharan, and David Dean. "Continuous pose-invariant lipreading". In Interspeech 2008. ISCA: ISCA, 2008. http://dx.doi.org/10.21437/interspeech.2008-664.
Luettin, Juergen, Neil A. Thacker, and Steve W. Beet. "Speaker identification by lipreading". In 4th International Conference on Spoken Language Processing (ICSLP 1996). ISCA: ISCA, 1996. http://dx.doi.org/10.21437/icslp.1996-16.
Zhou, Ziheng, Guoying Zhao, and Matti Pietikainen. "Lipreading: A Graph Embedding Approach". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.133.
Martinez, Brais, Pingchuan Ma, Stavros Petridis, and Maja Pantic. "Lipreading Using Temporal Convolutional Networks". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053841.
Ong, Eng-Jon, and Richard Bowden. "Learning Sequential Patterns for Lipreading". In British Machine Vision Conference 2011. British Machine Vision Association, 2011. http://dx.doi.org/10.5244/c.25.55.
Noda, Kuniaki, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G. Okuno, and Tetsuya Ogata. "Lipreading using convolutional neural network". In Interspeech 2014. ISCA: ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-293.