Journal articles on the topic 'Visual speech information'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Visual speech information.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information." Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition." Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information." Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual." Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants use of visual information in speech segmentation." Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.
Reed, Rebecca K., and Edward T. Auer. "Influence of visual speech information on the identification of foreign accented speech." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2660. http://dx.doi.org/10.1121/1.4784199.
Kim, Jeesun, and Chris Davis. "How visual timing and form information affect speech and non-speech processing." Brain and Language 137 (October 2014): 86–90. http://dx.doi.org/10.1016/j.bandl.2014.07.012.
Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Plass, John, David Brang, Satoru Suzuki, and Marcia Grabowecky. "Vision perceptually restores auditory spectral dynamics in speech." Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16920–27. http://dx.doi.org/10.1073/pnas.2002887117.
Karpov, Alexey Anatolyevich. "Assistive Information Technologies based on Audio-Visual Speech Interfaces." SPIIRAS Proceedings 4, no. 27 (March 17, 2014): 114. http://dx.doi.org/10.15622/sp.27.10.
Whalen, D. H., Julia Irwin, and Carol A. Fowler. "Audiovisual integration of speech based on minimal visual information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2569. http://dx.doi.org/10.1121/1.417395.
Gurban, M., and J. P. Thiran. "Information Theoretic Feature Extraction for Audio-Visual Speech Recognition." IEEE Transactions on Signal Processing 57, no. 12 (December 2009): 4765–76. http://dx.doi.org/10.1109/tsp.2009.2026513.
Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech." Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).
Borrie, Stephanie A. "Visual speech information: A help or hindrance in perceptual processing of dysarthric speech." Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1473–80. http://dx.doi.org/10.1121/1.4913770.
Wayne, Rachel V., and Ingrid S. Johnsrude. "The role of visual speech information in supporting perceptual learning of degraded speech." Journal of Experimental Psychology: Applied 18, no. 4 (2012): 419–35. http://dx.doi.org/10.1037/a0031042.
Winneke, Axel H., and Natalie A. Phillips. "Brain processes underlying the integration of audio-visual speech and non-speech information." Brain and Cognition 67 (June 2008): 45. http://dx.doi.org/10.1016/j.bandc.2008.02.096.
Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study." Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.
Yordamlı, Arzu, and Doğu Erdener. "Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study." Languages 3, no. 4 (October 17, 2018): 38. http://dx.doi.org/10.3390/languages3040038.
Drijvers, Linda, and Asli Özyürek. "Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension." Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 212–22. http://dx.doi.org/10.1044/2016_jslhr-h-16-0101.
Rosenblum, Lawrence D. "Speech Perception as a Multimodal Phenomenon." Current Directions in Psychological Science 17, no. 6 (December 2008): 405–9. http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x.
Mishra, Saumya, Anup Kumar Gupta, and Puneet Gupta. "DARE: Deceiving Audio–Visual speech Recognition model." Knowledge-Based Systems 232 (November 2021): 107503. http://dx.doi.org/10.1016/j.knosys.2021.107503.
Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan, and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information." Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.
Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.
Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations." Perception 36, no. 10 (October 2007): 1535–45. http://dx.doi.org/10.1068/p5852.
Jesse, Alexandra, Nick Vrignaud, Michael M. Cohen, and Dominic W. Massaro. "The processing of information from multiple sources in simultaneous interpreting." Interpreting. International Journal of Research and Practice in Interpreting 5, no. 2 (December 31, 2000): 95–115. http://dx.doi.org/10.1075/intp.5.2.04jes.
Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction." Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.
Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio." Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.
Dias, James W., and Lawrence D. Rosenblum. "Visual Influences on Interactive Speech Alignment." Perception 40, no. 12 (January 1, 2011): 1457–66. http://dx.doi.org/10.1068/p7071.
Campbell, Ruth. "The processing of audio-visual speech: empirical and neural bases." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 7, 2007): 1001–10. http://dx.doi.org/10.1098/rstb.2007.2155.
Metzger, Brian A., John F. Magnotti, Elizabeth Nesbitt, Daniel Yoshor, and Michael S. Beauchamp. "Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG." Journal of Vision 20, no. 11 (October 20, 2020): 434. http://dx.doi.org/10.1167/jov.20.11.434.
Irwin, Julia, Trey Avery, Lawrence Brancazio, Jacqueline Turcios, Kayleigh Ryherd, and Nicole Landi. "Electrophysiological Indices of Audiovisual Speech Perception: Beyond the McGurk Effect and Speech in Noise." Multisensory Research 31, no. 1-2 (2018): 39–56. http://dx.doi.org/10.1163/22134808-00002580.
Van Engen, Kristin J., Jasmine E. B. Phelps, Rajka Smiljanic, and Bharath Chandrasekaran. "Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker." Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1908–18. http://dx.doi.org/10.1044/jslhr-h-13-0076.
Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia." Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.
Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech." Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.
Taitelbaum-Swead, Riki, and Leah Fostick. "Auditory and visual information in speech perception: A developmental perspective." Clinical Linguistics & Phonetics 30, no. 7 (March 30, 2016): 531–45. http://dx.doi.org/10.3109/02699206.2016.1151938.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Time‐varying information for vowel identification in visual speech perception." Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2482. http://dx.doi.org/10.1121/1.4743160.
Johnson, Jennifer A., and Lawrence D. Rosenblum. "Hemispheric differences in perceiving and integrating dynamic visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417400.
Ogihara, Akio, Akira Shintani, Naoshi Doi, and Kunio Fukunaga. "HMM Speech Recognition Using Fusion of Visual and Auditory Information." IEEJ Transactions on Electronics, Information and Systems 115, no. 11 (1995): 1317–24. http://dx.doi.org/10.1541/ieejeiss1987.115.11_1317.
Keintz, Connie K., Kate Bunton, and Jeannette D. Hoit. "Influence of Visual Information on the Intelligibility of Dysarthric Speech." American Journal of Speech-Language Pathology 16, no. 3 (August 2007): 222–34. http://dx.doi.org/10.1044/1058-0360(2007/027).
Yuan, Yi, Andrew Lotto, and Yonghee Oh. "Temporal cues from visual information benefit speech perception in noise." Journal of the Acoustical Society of America 146, no. 4 (October 2019): 3056. http://dx.doi.org/10.1121/1.5137604.
Blank, Helen, and Katharina von Kriegstein. "Mechanisms of enhancing visual–speech recognition by prior auditory information." NeuroImage 65 (January 2013): 109–18. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047.
Moon, Il-Joon, Mini Jo, Ga-Young Kim, Nicolas Kim, Young-Sang Cho, Sung-Hwa Hong, and Hye-Yoon Seol. "How Does a Face Mask Impact Speech Perception?" Healthcare 10, no. 9 (September 7, 2022): 1709. http://dx.doi.org/10.3390/healthcare10091709.
Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech." International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.
McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.