Journal articles on the topic "Visual speech information"
Below are the top 50 journal articles for research on the topic "Visual speech information".
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information". Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition". Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
HISANAGA, Satoko, Kaoru SEKIYAMA, Tomohiko IGASAKI, and Nobuki MURAYAMA. "Effects of visual information on audio-visual speech processing". Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual". Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants use of visual information in speech segmentation". Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information". Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits". Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.
Reed, Rebecca K., and Edward T. Auer. "Influence of visual speech information on the identification of foreign accented speech." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2660. http://dx.doi.org/10.1121/1.4784199.
Kim, Jeesun, and Chris Davis. "How visual timing and form information affect speech and non-speech processing". Brain and Language 137 (October 2014): 86–90. http://dx.doi.org/10.1016/j.bandl.2014.07.012.
Sams, M. "Audiovisual Speech Perception". Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Plass, John, David Brang, Satoru Suzuki, and Marcia Grabowecky. "Vision perceptually restores auditory spectral dynamics in speech". Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16920–27. http://dx.doi.org/10.1073/pnas.2002887117.
Karpov, Alexey Anatolyevich. "Assistive Information Technologies based on Audio-Visual Speech Interfaces". SPIIRAS Proceedings 4, no. 27 (March 17, 2014): 114. http://dx.doi.org/10.15622/sp.27.10.
Whalen, D. H., Julia Irwin, and Carol A. Fowler. "Audiovisual integration of speech based on minimal visual information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2569. http://dx.doi.org/10.1121/1.417395.
Gurban, M., and J. P. Thiran. "Information Theoretic Feature Extraction for Audio-Visual Speech Recognition". IEEE Transactions on Signal Processing 57, no. 12 (December 2009): 4765–76. http://dx.doi.org/10.1109/tsp.2009.2026513.
Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech". Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).
Borrie, Stephanie A. "Visual speech information: A help or hindrance in perceptual processing of dysarthric speech". Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1473–80. http://dx.doi.org/10.1121/1.4913770.
Wayne, Rachel V., and Ingrid S. Johnsrude. "The role of visual speech information in supporting perceptual learning of degraded speech." Journal of Experimental Psychology: Applied 18, no. 4 (2012): 419–35. http://dx.doi.org/10.1037/a0031042.
Winneke, Axel H., and Natalie A. Phillips. "Brain processes underlying the integration of audio-visual speech and non-speech information". Brain and Cognition 67 (June 2008): 45. http://dx.doi.org/10.1016/j.bandc.2008.02.096.
Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study". Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.
Yordamlı, Arzu, and Doğu Erdener. "Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study". Languages 3, no. 4 (October 17, 2018): 38. http://dx.doi.org/10.3390/languages3040038.
Drijvers, Linda, and Asli Özyürek. "Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension". Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 212–22. http://dx.doi.org/10.1044/2016_jslhr-h-16-0101.
Rosenblum, Lawrence D. "Speech Perception as a Multimodal Phenomenon". Current Directions in Psychological Science 17, no. 6 (December 2008): 405–9. http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x.
Mishra, Saumya, Anup Kumar Gupta, and Puneet Gupta. "DARE: Deceiving Audio–Visual speech Recognition model". Knowledge-Based Systems 232 (November 2021): 107503. http://dx.doi.org/10.1016/j.knosys.2021.107503.
Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan, and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information". Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.
Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study". Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.
Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations". Perception 36, no. 10 (October 2007): 1535–45. http://dx.doi.org/10.1068/p5852.
Jesse, Alexandra, Nick Vrignaud, Michael M. Cohen, and Dominic W. Massaro. "The processing of information from multiple sources in simultaneous interpreting". Interpreting. International Journal of Research and Practice in Interpreting 5, no. 2 (December 31, 2000): 95–115. http://dx.doi.org/10.1075/intp.5.2.04jes.
Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction". Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.
Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio". Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.
Dias, James W., and Lawrence D. Rosenblum. "Visual Influences on Interactive Speech Alignment". Perception 40, no. 12 (January 1, 2011): 1457–66. http://dx.doi.org/10.1068/p7071.
Campbell, Ruth. "The processing of audio-visual speech: empirical and neural bases". Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 7, 2007): 1001–10. http://dx.doi.org/10.1098/rstb.2007.2155.
Metzger, Brian A., John F. Magnotti, Elizabeth Nesbitt, Daniel Yoshor, and Michael S. Beauchamp. "Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG". Journal of Vision 20, no. 11 (October 20, 2020): 434. http://dx.doi.org/10.1167/jov.20.11.434.
Irwin, Julia, Trey Avery, Lawrence Brancazio, Jacqueline Turcios, Kayleigh Ryherd, and Nicole Landi. "Electrophysiological Indices of Audiovisual Speech Perception: Beyond the McGurk Effect and Speech in Noise". Multisensory Research 31, no. 1-2 (2018): 39–56. http://dx.doi.org/10.1163/22134808-00002580.
Van Engen, Kristin J., Jasmine E. B. Phelps, Rajka Smiljanic, and Bharath Chandrasekaran. "Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker". Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1908–18. http://dx.doi.org/10.1044/jslhr-h-13-0076.
Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia". Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.
Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech". Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.
Taitelbaum-Swead, Riki, and Leah Fostick. "Auditory and visual information in speech perception: A developmental perspective". Clinical Linguistics & Phonetics 30, no. 7 (March 30, 2016): 531–45. http://dx.doi.org/10.3109/02699206.2016.1151938.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Time‐varying information for vowel identification in visual speech perception". Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2482. http://dx.doi.org/10.1121/1.4743160.
Johnson, Jennifer A., and Lawrence D. Rosenblum. "Hemispheric differences in perceiving and integrating dynamic visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417400.
Ogihara, Akio, Akira Shintani, Naoshi Doi, and Kunio Fukunaga. "HMM Speech Recognition Using Fusion of Visual and Auditory Information". IEEJ Transactions on Electronics, Information and Systems 115, no. 11 (1995): 1317–24. http://dx.doi.org/10.1541/ieejeiss1987.115.11_1317.
Keintz, Connie K., Kate Bunton, and Jeannette D. Hoit. "Influence of Visual Information on the Intelligibility of Dysarthric Speech". American Journal of Speech-Language Pathology 16, no. 3 (August 2007): 222–34. http://dx.doi.org/10.1044/1058-0360(2007/027).
Yuan, Yi, Andrew Lotto, and Yonghee Oh. "Temporal cues from visual information benefit speech perception in noise". Journal of the Acoustical Society of America 146, no. 4 (October 2019): 3056. http://dx.doi.org/10.1121/1.5137604.
Blank, Helen, and Katharina von Kriegstein. "Mechanisms of enhancing visual–speech recognition by prior auditory information". NeuroImage 65 (January 2013): 109–18. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047.
Moon, Il-Joon, Mini Jo, Ga-Young Kim, Nicolas Kim, Young-Sang Cho, Sung-Hwa Hong, and Hye-Yoon Seol. "How Does a Face Mask Impact Speech Perception?" Healthcare 10, no. 9 (September 7, 2022): 1709. http://dx.doi.org/10.3390/healthcare10091709.
Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech". International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.
McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception". Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.