A selection of scholarly literature on the topic "Visual speech information"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Visual speech information".
Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if these are available in the source metadata.
Journal articles on the topic "Visual speech information"
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information." Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition." Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information." Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual." Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants' use of visual information in speech segmentation." Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Повний текст джерелаДисертації з теми "Visual speech information"
Le, Cornu Thomas. "Reconstruction of intelligible audio speech from visual speech information." Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/67012/.
Andrews, Brandie. "Auditory and visual information facilitating speech integration." Connect to resource, 2007. http://hdl.handle.net/1811/25202.
Повний текст джерелаTitle from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 27-28). Available online via Ohio State University's Knowledge Bank.
Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.
Keintz, Constance Kay. "Influence of visual information on the intelligibility of dysarthric speech." Diss., The University of Arizona, 2005. http://hdl.handle.net/10150/280714.
Hagrot, Joel. "A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.
This project investigates how artificial neural networks can be used for visual speech synthesis. The aim was to produce a framework for animated chatbots in Swedish. A literature review concluded that the state-of-the-art method was to use artificial neural networks with either audio or phoneme sequences as input. Three surveys were conducted, both in the context of the final product and in a more neutral context with less post-processing. They compared the ground-truth data, recorded with the iPhone X depth-sensing camera, against both the neural-network model and a basic, so-called baseline model. The statistical analysis used mixed-effects models to find statistically significant differences in the results, and the temporal dynamics were also analyzed. The results show that a relatively simple neural network could learn to generate blendshape sequences from phoneme sequences with satisfactory results, except that requirements such as lip closure for certain consonants were not always met. The problems with consonants could to some extent also be seen in the ground-truth data. This could be resolved with consonant-specific post-processing, which made the network's animations indistinguishable from the ground truth while also being rated better than the baseline model's animations. In summary, the network learned vowels well, but would probably have needed more data to satisfactorily meet the requirements for certain consonants. For the final product, these requirements can nevertheless be met through consonant-specific post-processing.
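For orientation only: the pipeline summarized in this abstract maps phoneme sequences to per-frame blendshape coefficients with a neural network. The sketch below is a hypothetical, minimal PyTorch illustration of that kind of sequence model, not the thesis's actual architecture or code; the phoneme inventory size, number of blendshapes, and layer widths are assumed placeholder values.

import torch
import torch.nn as nn

class PhonemeToBlendshapes(nn.Module):
    """Toy sequence model: per-frame phoneme IDs in, blendshape weights out."""
    def __init__(self, num_phonemes=50, embed_dim=64, hidden_dim=128, num_blendshapes=52):
        super().__init__()
        self.embed = nn.Embedding(num_phonemes, embed_dim)        # phoneme ID -> vector
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)                    # temporal context over frames
        self.out = nn.Linear(2 * hidden_dim, num_blendshapes)     # per-frame coefficients

    def forward(self, phoneme_ids):
        # phoneme_ids: (batch, frames) integer tensor, one phoneme label per video frame
        x = self.embed(phoneme_ids)
        x, _ = self.rnn(x)
        return torch.sigmoid(self.out(x))                         # weights constrained to [0, 1]

# Dummy usage: 2 clips, 30 frames each -> (2, 30, 52) predicted blendshape weights.
model = PhonemeToBlendshapes()
pred = model(torch.randint(0, 50, (2, 30)))

A consonant-specific post-processing step of the kind the abstract describes (e.g. enforcing lip closure) would then adjust the relevant blendshape channels after prediction.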
Bergmann, Kirsten, and Stefan Kopp. "Verbal or visual? : How information is distributed across speech and gesture in spatial dialog." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1037/.
This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. The influence of utterance meaning and the wider dialog context was tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals from the addressee were found to have no influence.
Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition." Thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.
Повний текст джерела"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002" Bibliography : leaves 83-93.
Patterson, Robert W. "The effects of inaccurate speech information on performance in a visual search and identification task." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/30481.
Erdener, Vahit Dogu, University of Western Sydney, College of Arts, Education and Social Sciences, and School of Psychology. "The effect of auditory, visual and orthographic information on second language acquisition." THESIS_CAESS_PSY_Erdener_V.xml, 2002. http://handle.uws.edu.au:8081/1959.7/685.
Master of Arts (Hons)
Ostroff, Wendy Louise. "Non-linguistic Influences on Infants' Nonnative Phoneme Perception: Exaggerated prosody and Visual Speech Information Aid Discrimination." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27640.
Ph. D.
Books on the topic "Visual speech information"
Massaro, Dominic W. Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum Associates, 1987.
Cranton, Wayne, Mark Fihn, and SpringerLink (Online service), eds. Handbook of Visual Display Technology. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Learning disabilities sourcebook: Basic consumer health information about dyslexia, dyscalculia, dysgraphia, speech and communication disorders, auditory and visual processing disorders, and other conditions that make learning difficult, including attention deficit hyperactivity disorder, down syndrome and other chromosomal disorders, fetal alcohol spectrum disorders, hearing and visual impairment, autism and other pervasive developmental disorders, and traumatic brain Injury; along with facts about diagnosing learning disabilities, early intervention, the special education process, legal protections, assistive technology, and accommodations, and guidelines for life-stage transitions, suggestions for coping with daily challenges, a glossary of related terms, and a directory of additional resources. 4th ed. Detroit, MI: Omnigraphics, 2012.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2016.
Massaro, Dominic W. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Знайти повний текст джерелаЧастини книг з теми "Visual speech information"
Pondit, Ashish, Muhammad Eshaque Ali Rukon, Anik Das, and Muhammad Ashad Kabir. "BenAV: a Bengali Audio-Visual Corpus for Visual Speech Recognition." In Neural Information Processing, 526–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_45.
Gupta, Deepika, Preety Singh, V. Laxmi, and Manoj S. Gaur. "Boundary Descriptors for Visual Speech Recognition." In Computer and Information Sciences II, 307–13. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2155-8_39.
Giachanou, Anastasia, Guobiao Zhang, and Paolo Rosso. "Multimodal Fake News Detection with Textual, Visual and Semantic Information." In Text, Speech, and Dialogue, 30–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58323-1_3.
Wu, Shiow-yang, and Wen-Shen Chen. "Oral-Query-by-Sketch: An XML-Based Framework for Speech Access to Image Databases." In Visual and Multimedia Information Management, 341–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-0-387-35592-4_24.
Sagheer, Alaa, and Saleh Aly. "Integration of Face Detection and User Identification with Visual Speech Recognition." In Neural Information Processing, 479–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_57.
Gui, Jiaping, and Shilin Wang. "Shape Feature Analysis for Visual Speech and Speaker Recognition." In Communications in Computer and Information Science, 167–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23235-0_22.
Nakamura, Satoshi. "Fusion of Audio-Visual Information for Integrated Speech Processing." In Lecture Notes in Computer Science, 127–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45344-x_20.
Foo, Say Wei, and Liang Dong. "Recognition of Visual Speech Elements Using Hidden Markov Models." In Advances in Multimedia Information Processing — PCM 2002, 607–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_75.
Bastanfard, Azam, Mohammad Aghaahmadi, Alireza Abdi kelishami, Maryam Fazel, and Maedeh Moghadam. "Persian Viseme Classification for Developing Visual Speech Training Application." In Advances in Multimedia Information Processing - PCM 2009, 1080–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_104.
Galuščáková, Petra, Pavel Pecina, and Jan Hajič. "Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval." In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, 100–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33247-0_12.
Повний текст джерелаТези доповідей конференцій з теми "Visual speech information"
Akdemir, Eren, and Tolga Ciloglu. "Using visual information in automatic speech segmentation." In 2008 IEEE 16th Signal Processing, Communication and Applications Conference (SIU). IEEE, 2008. http://dx.doi.org/10.1109/siu.2008.4632641.
Kawase, Saya, Jeesun Kim, Vincent Aubanel, and Chris Davis. "Perceiving foreign-accented auditory-visual speech in noise: The influence of visual form and timing information." In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-99.
Chen, Tsuhan, H. P. Graf, Homer H. Chen, Wu Chou, Barry G. Haskell, Eric D. Petajan, and Yao Wang. "Lip synchronization in talking head video utilizing speech information." In Visual Communications and Image Processing '95, edited by Lance T. Wu. SPIE, 1995. http://dx.doi.org/10.1117/12.206706.
Jiang, Wei, Lexing Xie, and Shih-Fu Chang. "Visual saliency with side information." In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959946.
Karabalkan, H., and H. Erdogan. "Information fusion techniques in Audio-Visual Speech Recognition." In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136443.
Maulana, Muhammad Rizki Aulia Rahman, and Mohamad Ivan Fanany. "Indonesian audio-visual speech corpus for multimodal automatic speech recognition." In 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS). IEEE, 2017. http://dx.doi.org/10.1109/icacsis.2017.8355062.
Aubanel, Vincent, Cassandra Masters, Jeesun Kim, and Chris Davis. "Contribution of visual rhythmic information to speech perception in noise." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-18.
Hui Zhao and Chaojing Tang. "Visual speech synthesis based on Chinese dynamic visemes." In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607983.
Luo, Yiyu, Jing Wang, Xinyao Wang, Liang Wen, and Lizhong Wang. "Audio-Visual Speech Separation Using I-Vectors." In 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP). IEEE, 2019. http://dx.doi.org/10.1109/icicsp48821.2019.8958547.
Lu, Longbin, Xinman Zhang, and Xuebin Xu. "Fusion of face and visual speech information for identity verification." In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2017. http://dx.doi.org/10.1109/ispacs.2017.8266530.
Повний текст джерелаЗвіти організацій з теми "Visual speech information"
Yatsymirska, Mariya. Social Expression in Multimedia Texts. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.