Journal articles on the topic "Visual speech information"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for research on the topic "Visual speech information".
Next to each source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these are available in the metadata.
Browse journal articles from many areas of research and compile an accurate bibliography.
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information". Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition". Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing". Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual". Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants use of visual information in speech segmentation". Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information". Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits". Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.
Reed, Rebecca K., and Edward T. Auer. "Influence of visual speech information on the identification of foreign accented speech." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2660. http://dx.doi.org/10.1121/1.4784199.
Kim, Jeesun, and Chris Davis. "How visual timing and form information affect speech and non-speech processing". Brain and Language 137 (October 2014): 86–90. http://dx.doi.org/10.1016/j.bandl.2014.07.012.
Sams, M. "Audiovisual Speech Perception". Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Plass, John, David Brang, Satoru Suzuki, and Marcia Grabowecky. "Vision perceptually restores auditory spectral dynamics in speech". Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16920–27. http://dx.doi.org/10.1073/pnas.2002887117.
Karpov, Alexey Anatolyevich. "Assistive Information Technologies based on Audio-Visual Speech Interfaces". SPIIRAS Proceedings 4, no. 27 (March 17, 2014): 114. http://dx.doi.org/10.15622/sp.27.10.
Whalen, D. H., Julia Irwin, and Carol A. Fowler. "Audiovisual integration of speech based on minimal visual information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2569. http://dx.doi.org/10.1121/1.417395.
Gurban, M., and J. P. Thiran. "Information Theoretic Feature Extraction for Audio-Visual Speech Recognition". IEEE Transactions on Signal Processing 57, no. 12 (December 2009): 4765–76. http://dx.doi.org/10.1109/tsp.2009.2026513.
Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech". Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).
Borrie, Stephanie A. "Visual speech information: A help or hindrance in perceptual processing of dysarthric speech". Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1473–80. http://dx.doi.org/10.1121/1.4913770.
Wayne, Rachel V., and Ingrid S. Johnsrude. "The role of visual speech information in supporting perceptual learning of degraded speech." Journal of Experimental Psychology: Applied 18, no. 4 (2012): 419–35. http://dx.doi.org/10.1037/a0031042.
Winneke, Axel H., and Natalie A. Phillips. "Brain processes underlying the integration of audio-visual speech and non-speech information". Brain and Cognition 67 (June 2008): 45. http://dx.doi.org/10.1016/j.bandc.2008.02.096.
Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study". Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.
Yordamlı, Arzu, and Doğu Erdener. "Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study". Languages 3, no. 4 (October 17, 2018): 38. http://dx.doi.org/10.3390/languages3040038.
Drijvers, Linda, and Asli Özyürek. "Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension". Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 212–22. http://dx.doi.org/10.1044/2016_jslhr-h-16-0101.
Rosenblum, Lawrence D. "Speech Perception as a Multimodal Phenomenon". Current Directions in Psychological Science 17, no. 6 (December 2008): 405–9. http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x.
Mishra, Saumya, Anup Kumar Gupta, and Puneet Gupta. "DARE: Deceiving Audio–Visual speech Recognition model". Knowledge-Based Systems 232 (November 2021): 107503. http://dx.doi.org/10.1016/j.knosys.2021.107503.
Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan, and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information". Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.
Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study". Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.
Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations". Perception 36, no. 10 (October 2007): 1535–45. http://dx.doi.org/10.1068/p5852.
Jesse, Alexandra, Nick Vrignaud, Michael M. Cohen, and Dominic W. Massaro. "The processing of information from multiple sources in simultaneous interpreting". Interpreting. International Journal of Research and Practice in Interpreting 5, no. 2 (December 31, 2000): 95–115. http://dx.doi.org/10.1075/intp.5.2.04jes.
Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction". Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.
Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio". Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.
Dias, James W., and Lawrence D. Rosenblum. "Visual Influences on Interactive Speech Alignment". Perception 40, no. 12 (January 1, 2011): 1457–66. http://dx.doi.org/10.1068/p7071.
Campbell, Ruth. "The processing of audio-visual speech: empirical and neural bases". Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 7, 2007): 1001–10. http://dx.doi.org/10.1098/rstb.2007.2155.
Metzger, Brian A., John F. Magnotti, Elizabeth Nesbitt, Daniel Yoshor, and Michael S. Beauchamp. "Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG". Journal of Vision 20, no. 11 (October 20, 2020): 434. http://dx.doi.org/10.1167/jov.20.11.434.
Irwin, Julia, Trey Avery, Lawrence Brancazio, Jacqueline Turcios, Kayleigh Ryherd, and Nicole Landi. "Electrophysiological Indices of Audiovisual Speech Perception: Beyond the McGurk Effect and Speech in Noise". Multisensory Research 31, no. 1-2 (2018): 39–56. http://dx.doi.org/10.1163/22134808-00002580.
Van Engen, Kristin J., Jasmine E. B. Phelps, Rajka Smiljanic, and Bharath Chandrasekaran. "Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker". Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1908–18. http://dx.doi.org/10.1044/jslhr-h-13-0076.
Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia". Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.
Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech". Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.
Taitelbaum-Swead, Riki, and Leah Fostick. "Auditory and visual information in speech perception: A developmental perspective". Clinical Linguistics & Phonetics 30, no. 7 (March 30, 2016): 531–45. http://dx.doi.org/10.3109/02699206.2016.1151938.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Time‐varying information for vowel identification in visual speech perception". Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2482. http://dx.doi.org/10.1121/1.4743160.
Johnson, Jennifer A., and Lawrence D. Rosenblum. "Hemispheric differences in perceiving and integrating dynamic visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417400.
Ogihara, Akio, Akira Shintani, Naoshi Doi, and Kunio Fukunaga. "HMM Speech Recognition Using Fusion of Visual and Auditory Information". IEEJ Transactions on Electronics, Information and Systems 115, no. 11 (1995): 1317–24. http://dx.doi.org/10.1541/ieejeiss1987.115.11_1317.
Keintz, Connie K., Kate Bunton, and Jeannette D. Hoit. "Influence of Visual Information on the Intelligibility of Dysarthric Speech". American Journal of Speech-Language Pathology 16, no. 3 (August 2007): 222–34. http://dx.doi.org/10.1044/1058-0360(2007/027).
Yuan, Yi, Andrew Lotto, and Yonghee Oh. "Temporal cues from visual information benefit speech perception in noise". Journal of the Acoustical Society of America 146, no. 4 (October 2019): 3056. http://dx.doi.org/10.1121/1.5137604.
Blank, Helen, and Katharina von Kriegstein. "Mechanisms of enhancing visual–speech recognition by prior auditory information". NeuroImage 65 (January 2013): 109–18. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047.
Moon, Il-Joon, Mini Jo, Ga-Young Kim, Nicolas Kim, Young-Sang Cho, Sung-Hwa Hong, and Hye-Yoon Seol. "How Does a Face Mask Impact Speech Perception?" Healthcare 10, no. 9 (September 7, 2022): 1709. http://dx.doi.org/10.3390/healthcare10091709.
Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech". International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.
McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception". Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.