Selected scientific literature on the topic "Visual speech information"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Visual speech information".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is included in the work's metadata.
Journal articles on the topic "Visual speech information"
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information". Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition". Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
HISANAGA, Satoko, Kaoru SEKIYAMA, Tomohiko IGASAKI, and Nobuki MURAYAMA. "Effects of visual information on audio-visual speech processing". Proceedings of the Annual Convention of the Japanese Psychological Association 75 (15 September 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual". Behavioral & Social Sciences Librarian 8, no. 3-4 (16 April 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants use of visual information in speech segmentation". Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information". Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Testo completoTesi sul tema "Visual speech information"
Le Cornu, Thomas. "Reconstruction of intelligible audio speech from visual speech information". Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/67012/.
Andrews, Brandie. "Auditory and visual information facilitating speech integration". Connect to resource, 2007. http://hdl.handle.net/1811/25202.
Testo completoTitle from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 27-28). Available online via Ohio State University's Knowledge Bank.
Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.
Keintz, Constance Kay. "Influence of visual information on the intelligibility of dysarthric speech". Diss., The University of Arizona, 2005. http://hdl.handle.net/10150/280714.
Testo completoHagrot, Joel. "A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.
This project investigates how artificial neural networks can be used for visual speech synthesis. The aim was to develop a framework for animated chatbots in Swedish. A literature review concluded that the state-of-the-art approach was to use artificial neural networks with either audio or phoneme sequences as input. Three surveys were conducted, both in the context of the final product and in a more neutral context with less post-processing. They compared the ground-truth data, recorded with the iPhone X's depth-sensing camera, with both the neural-network model and a basic so-called baseline model. The statistical analysis used mixed-effects models to find statistically significant differences in the results. The temporal dynamics were also analyzed. The results show that a relatively simple neural network was able to learn to generate blendshape sequences from phoneme sequences with satisfactory results, except that requirements such as lip closure for certain consonants were not always met. The problems with consonants could, to some extent, also be seen in the ground-truth data. This could be solved with consonant-specific post-processing, which made the neural network's animations indistinguishable from the ground truth while at the same time being perceived as better than the baseline model's animations. In summary, the network learned vowels well, but would probably have needed more data to satisfactorily meet the requirements for certain consonants. For the final product, these requirements can nevertheless be met with consonant-specific post-processing.
Bergmann, Kirsten, and Stefan Kopp. "Verbal or visual? : How information is distributed across speech and gesture in spatial dialog". Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1037/.
This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee did not have any influence.
Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition". View thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002". Bibliography: leaves 83-93.
Patterson, Robert W. "The effects of inaccurate speech information on performance in a visual search and identification task". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/30481.
Erdener, Vahit Dogu, University of Western Sydney, College of Arts Education and Social Sciences, and School of Psychology. "The effect of auditory, visual and orthographic information on second language acquisition". THESIS_CAESS_PSY_Erdener_V.xml, 2002. http://handle.uws.edu.au:8081/1959.7/685.
Master of Arts (Hons)
Ostroff, Wendy Louise. "Non-linguistic Influences on Infants' Nonnative Phoneme Perception: Exaggerated prosody and Visual Speech Information Aid Discrimination". Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27640.
Ph. D.
Books on the topic "Visual speech information"
Massaro, Dominic W. Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum Associates, 1987.
Cranton, Wayne, Mark Fihn, and SpringerLink (Online service), eds. Handbook of Visual Display Technology. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Learning disabilities sourcebook: Basic consumer health information about dyslexia, dyscalculia, dysgraphia, speech and communication disorders, auditory and visual processing disorders, and other conditions that make learning difficult, including attention deficit hyperactivity disorder, down syndrome and other chromosomal disorders, fetal alcohol spectrum disorders, hearing and visual impairment, autism and other pervasive developmental disorders, and traumatic brain Injury; along with facts about diagnosing learning disabilities, early intervention, the special education process, legal protections, assistive technology, and accommodations, and guidelines for life-stage transitions, suggestions for coping with daily challenges, a glossary of related terms, and a directory of additional resources. 4th ed. Detroit, MI: Omnigraphics, 2012.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2016.
Massaro, Dominic W. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Cerca il testo completoCapitoli di libri sul tema "Visual speech information"
Pondit, Ashish, Muhammad Eshaque Ali Rukon, Anik Das, and Muhammad Ashad Kabir. "BenAV: a Bengali Audio-Visual Corpus for Visual Speech Recognition". In Neural Information Processing, 526–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_45.
Gupta, Deepika, Preety Singh, V. Laxmi, and Manoj S. Gaur. "Boundary Descriptors for Visual Speech Recognition". In Computer and Information Sciences II, 307–13. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2155-8_39.
Giachanou, Anastasia, Guobiao Zhang, and Paolo Rosso. "Multimodal Fake News Detection with Textual, Visual and Semantic Information". In Text, Speech, and Dialogue, 30–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58323-1_3.
Wu, Shiow-yang, and Wen-Shen Chen. "Oral-Query-by-Sketch: An XML-Based Framework for Speech Access to Image Databases". In Visual and Multimedia Information Management, 341–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-0-387-35592-4_24.
Sagheer, Alaa, and Saleh Aly. "Integration of Face Detection and User Identification with Visual Speech Recognition". In Neural Information Processing, 479–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_57.
Gui, Jiaping, and Shilin Wang. "Shape Feature Analysis for Visual Speech and Speaker Recognition". In Communications in Computer and Information Science, 167–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23235-0_22.
Nakamura, Satoshi. "Fusion of Audio-Visual Information for Integrated Speech Processing". In Lecture Notes in Computer Science, 127–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45344-x_20.
Foo, Say Wei, and Liang Dong. "Recognition of Visual Speech Elements Using Hidden Markov Models". In Advances in Multimedia Information Processing — PCM 2002, 607–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_75.
Bastanfard, Azam, Mohammad Aghaahmadi, Alireza Abdi kelishami, Maryam Fazel, and Maedeh Moghadam. "Persian Viseme Classification for Developing Visual Speech Training Application". In Advances in Multimedia Information Processing - PCM 2009, 1080–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_104.
Galuščáková, Petra, Pavel Pecina, and Jan Hajič. "Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval". In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, 100–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33247-0_12.
Testo completoAtti di convegni sul tema "Visual speech information"
Akdemir, Eren, and Tolga Ciloglu. "Using visual information in automatic speech segmentation". In 2008 IEEE 16th Signal Processing, Communication and Applications Conference (SIU). IEEE, 2008. http://dx.doi.org/10.1109/siu.2008.4632641.
Kawase, Saya, Jeesun Kim, Vincent Aubanel, and Chris Davis. "Perceiving foreign-accented auditory-visual speech in noise: The influence of visual form and timing information". In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-99.
Chen, Tsuhan, H. P. Graf, Homer H. Chen, Wu Chou, Barry G. Haskell, Eric D. Petajan, and Yao Wang. "Lip synchronization in talking head video utilizing speech information". In Visual Communications and Image Processing '95, edited by Lance T. Wu. SPIE, 1995. http://dx.doi.org/10.1117/12.206706.
Jiang, Wei, Lexing Xie, and Shih-Fu Chang. "Visual saliency with side information". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959946.
Karabalkan, H., and H. Erdogan. "Information fusion techniques in Audio-Visual Speech Recognition". In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136443.
Maulana, Muhammad Rizki Aulia Rahman, and Mohamad Ivan Fanany. "Indonesian audio-visual speech corpus for multimodal automatic speech recognition". In 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS). IEEE, 2017. http://dx.doi.org/10.1109/icacsis.2017.8355062.
Aubanel, Vincent, Cassandra Masters, Jeesun Kim, and Chris Davis. "Contribution of visual rhythmic information to speech perception in noise". In The 14th International Conference on Auditory-Visual Speech Processing. ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-18.
Hui Zhao and Chaojing Tang. "Visual speech synthesis based on Chinese dynamic visemes". In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607983.
Luo, Yiyu, Jing Wang, Xinyao Wang, Liang Wen, and Lizhong Wang. "Audio-Visual Speech Separation Using I-Vectors". In 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP). IEEE, 2019. http://dx.doi.org/10.1109/icicsp48821.2019.8958547.
Lu, Longbin, Xinman Zhang, and Xuebin Xu. "Fusion of face and visual speech information for identity verification". In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2017. http://dx.doi.org/10.1109/ispacs.2017.8266530.
Testo completoRapporti di organizzazioni sul tema "Visual speech information"
Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.