Academic literature on the topic "Visual speech information"
Create an accurate citation in the APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Visual speech information".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visual speech information"
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information". Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition". Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing". Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual". Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants' use of visual information in speech segmentation". Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information". Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Theses on the topic "Visual speech information"
Le Cornu, Thomas. "Reconstruction of intelligible audio speech from visual speech information". Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/67012/.
Andrews, Brandie. "Auditory and visual information facilitating speech integration". Connect to resource, 2007. http://hdl.handle.net/1811/25202.
Texto completoTitle from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 27-28). Available online via Ohio State University's Knowledge Bank.
Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.
Keintz, Constance Kay. "Influence of visual information on the intelligibility of dysarthric speech". Diss., The University of Arizona, 2005. http://hdl.handle.net/10150/280714.
Texto completoHagrot, Joel. "A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.
This project investigates how artificial neural networks can be used for visual speech synthesis. The aim was to produce a framework for animated chatbots in Swedish. A review of the literature concluded that the state-of-the-art approach was to use artificial neural networks with either audio or phoneme sequences as input. Three user surveys were conducted, both in the context of the final product and in a more neutral context with less post-processing. They compared the ground truth, recorded with the iPhone X's depth-sensing camera, against both the neural network model and a simple so-called baseline model. The statistical analysis used mixed-effects models to find statistically significant differences in the results. The temporal dynamics were also analyzed. The results show that a relatively simple neural network could learn to generate blendshape sequences from phoneme sequences with satisfactory results, except that requirements such as lip closure for certain consonants were not always met. The problems with consonants could to some extent also be seen in the ground truth. This could be solved by means of consonant-specific post-processing, which made the neural network's animations indistinguishable from the ground truth while at the same time being perceived as better than the baseline model's animations. In conclusion, the neural network learned vowels well, but would probably have needed more data to satisfactorily meet the requirements for certain consonants. For the final product, these requirements can nevertheless be met with the help of consonant-specific post-processing.
Bergmann, Kirsten, and Stefan Kopp. "Verbal or visual? How information is distributed across speech and gesture in spatial dialog". Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1037/.
This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee did not have any influence.
Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition". Thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.
Texto completo"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002" Bibliography : leaves 83-93.
Patterson, Robert W. "The effects of inaccurate speech information on performance in a visual search and identification task". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/30481.
Erdener, Vahit Dogu, University of Western Sydney, College of Arts, Education and Social Sciences, and School of Psychology. "The effect of auditory, visual and orthographic information on second language acquisition". THESIS_CAESS_PSY_Erdener_V.xml, 2002. http://handle.uws.edu.au:8081/1959.7/685.
Master of Arts (Hons)
Ostroff, Wendy Louise. "Non-linguistic Influences on Infants' Nonnative Phoneme Perception: Exaggerated prosody and Visual Speech Information Aid Discrimination". Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27640.
Ph. D.
Books on the topic "Visual speech information"
Massaro, Dominic W. Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum Associates, 1987.
Cranton, Wayne, Mark Fihn, and SpringerLink (Online service), eds. Handbook of Visual Display Technology. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Learning disabilities sourcebook: Basic consumer health information about dyslexia, dyscalculia, dysgraphia, speech and communication disorders, auditory and visual processing disorders, and other conditions that make learning difficult, including attention deficit hyperactivity disorder, down syndrome and other chromosomal disorders, fetal alcohol spectrum disorders, hearing and visual impairment, autism and other pervasive developmental disorders, and traumatic brain injury; along with facts about diagnosing learning disabilities, early intervention, the special education process, legal protections, assistive technology, and accommodations, and guidelines for life-stage transitions, suggestions for coping with daily challenges, a glossary of related terms, and a directory of additional resources. 4th ed. Detroit, MI: Omnigraphics, 2012.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2016.
Massaro, Dominic W. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Buscar texto completoCapítulos de libros sobre el tema "Visual speech information"
Pondit, Ashish, Muhammad Eshaque Ali Rukon, Anik Das, and Muhammad Ashad Kabir. "BenAV: a Bengali Audio-Visual Corpus for Visual Speech Recognition". In Neural Information Processing, 526–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_45.
Gupta, Deepika, Preety Singh, V. Laxmi, and Manoj S. Gaur. "Boundary Descriptors for Visual Speech Recognition". In Computer and Information Sciences II, 307–13. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2155-8_39.
Giachanou, Anastasia, Guobiao Zhang, and Paolo Rosso. "Multimodal Fake News Detection with Textual, Visual and Semantic Information". In Text, Speech, and Dialogue, 30–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58323-1_3.
Wu, Shiow-yang, and Wen-Shen Chen. "Oral-Query-by-Sketch: An XML-Based Framework for Speech Access to Image Databases". In Visual and Multimedia Information Management, 341–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-0-387-35592-4_24.
Sagheer, Alaa, and Saleh Aly. "Integration of Face Detection and User Identification with Visual Speech Recognition". In Neural Information Processing, 479–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_57.
Gui, Jiaping, and Shilin Wang. "Shape Feature Analysis for Visual Speech and Speaker Recognition". In Communications in Computer and Information Science, 167–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23235-0_22.
Nakamura, Satoshi. "Fusion of Audio-Visual Information for Integrated Speech Processing". In Lecture Notes in Computer Science, 127–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45344-x_20.
Foo, Say Wei, and Liang Dong. "Recognition of Visual Speech Elements Using Hidden Markov Models". In Advances in Multimedia Information Processing — PCM 2002, 607–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_75.
Bastanfard, Azam, Mohammad Aghaahmadi, Alireza Abdi Kelishami, Maryam Fazel, and Maedeh Moghadam. "Persian Viseme Classification for Developing Visual Speech Training Application". In Advances in Multimedia Information Processing - PCM 2009, 1080–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_104.
Galuščáková, Petra, Pavel Pecina, and Jan Hajič. "Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval". In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, 100–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33247-0_12.
Conference papers on the topic "Visual speech information"
Akdemir, Eren, and Tolga Ciloglu. "Using visual information in automatic speech segmentation". In 2008 IEEE 16th Signal Processing, Communication and Applications Conference (SIU). IEEE, 2008. http://dx.doi.org/10.1109/siu.2008.4632641.
Kawase, Saya, Jeesun Kim, Vincent Aubanel, and Chris Davis. "Perceiving foreign-accented auditory-visual speech in noise: The influence of visual form and timing information". In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-99.
Chen, Tsuhan, H. P. Graf, Homer H. Chen, Wu Chou, Barry G. Haskell, Eric D. Petajan, and Yao Wang. "Lip synchronization in talking head video utilizing speech information". In Visual Communications and Image Processing '95, edited by Lance T. Wu. SPIE, 1995. http://dx.doi.org/10.1117/12.206706.
Jiang, Wei, Lexing Xie, and Shih-Fu Chang. "Visual saliency with side information". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959946.
Karabalkan, H., and H. Erdogan. "Information fusion techniques in Audio-Visual Speech Recognition". In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136443.
Maulana, Muhammad Rizki Aulia Rahman, and Mohamad Ivan Fanany. "Indonesian audio-visual speech corpus for multimodal automatic speech recognition". In 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS). IEEE, 2017. http://dx.doi.org/10.1109/icacsis.2017.8355062.
Aubanel, Vincent, Cassandra Masters, Jeesun Kim, and Chris Davis. "Contribution of visual rhythmic information to speech perception in noise". In The 14th International Conference on Auditory-Visual Speech Processing. ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-18.
Hui Zhao and Chaojing Tang. "Visual speech synthesis based on Chinese dynamic visemes". In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607983.
Luo, Yiyu, Jing Wang, Xinyao Wang, Liang Wen, and Lizhong Wang. "Audio-Visual Speech Separation Using I-Vectors". In 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP). IEEE, 2019. http://dx.doi.org/10.1109/icicsp48821.2019.8958547.
Lu, Longbin, Xinman Zhang, and Xuebin Xu. "Fusion of face and visual speech information for identity verification". In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2017. http://dx.doi.org/10.1109/ispacs.2017.8266530.
Reports on the topic "Visual speech information"
Yatsymirska, Mariya. Social Expression in Multimedia Texts. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.