Contents
Selection of scholarly literature on the topic "Visual speech information"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Visual speech information".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are present in its metadata.
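As an aside, the idea behind such automatic formatting can be illustrated with a minimal sketch in Python. This is a hypothetical formatter, not the site's actual implementation: the field names and the two styles shown here are assumptions, and a production system would instead drive a CSL processor with full style definitions.

def format_citation(ref, style="APA"):
    # Render one bibliography entry in the requested style.
    # Only APA and MLA are sketched; real styles have many more rules.
    authors = ", ".join(ref["authors"])
    if style == "APA":
        return (f'{authors} ({ref["year"]}). {ref["title"]}. '
                f'{ref["journal"]}, {ref["volume"]}({ref["issue"]}), '
                f'{ref["pages"]}. {ref["doi"]}')
    if style == "MLA":
        return (f'{authors}. "{ref["title"]}." {ref["journal"]}, '
                f'vol. {ref["volume"]}, no. {ref["issue"]}, {ref["year"]}, '
                f'pp. {ref["pages"]}.')
    raise ValueError(f"Unsupported style: {style}")

# Example: the first journal article in the list below.
miller2010 = {
    "authors": ["Miller, R. M.", "Sanchez, K.", "Rosenblum, L. D."],
    "year": 2010,
    "title": "Alignment to visual speech information",
    "journal": "Attention, Perception, & Psychophysics",
    "volume": 72,
    "issue": 6,
    "pages": "1614-1625",
    "doi": "http://dx.doi.org/10.3758/app.72.6.1614",
}
print(format_citation(miller2010, "APA"))
print(format_citation(miller2010, "MLA"))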
Journal articles on the topic "Visual speech information"
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information". Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition". Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing". Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual". Behavioral & Social Sciences Librarian 8, no. 3–4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants' use of visual information in speech segmentation". Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information". Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Der volle Inhalt der QuelleDissertationen zum Thema "Visual speech information"
Le Cornu, Thomas. "Reconstruction of intelligible audio speech from visual speech information". Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/67012/.
Andrews, Brandie. "Auditory and visual information facilitating speech integration". Thesis, Ohio State University, 2007. http://hdl.handle.net/1811/25202.
Title from first page of PDF file. Document formatted into pages; contains 43 p.; also includes graphics. Includes bibliographical references (p. 27–28). Available online via Ohio State University's Knowledge Bank.
Fixmer, Eric Norbert Charles. "Grouping of auditory and visual information in speech". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612553.
Keintz, Constance Kay. "Influence of visual information on the intelligibility of dysarthric speech". Diss., The University of Arizona, 2005. http://hdl.handle.net/10150/280714.
Hagrot, Joel. "A Data-Driven Approach for Automatic Visual Speech in Swedish Speech Synthesis Applications". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246393.
This project investigates how artificial neural networks can be used for visual speech synthesis. The aim was to produce a framework for animated chatbots in Swedish. A survey of the literature concluded that the state-of-the-art approach was to use artificial neural networks with either audio or phoneme sequences as input. Three user studies were conducted, both in the context of the final product and in a more neutral setting with less post-processing. They compared the ground truth, recorded with the iPhone X's depth-sensor camera, against both the neural-network model and a basic baseline model. The statistical analysis used mixed-effects models to find statistically significant differences in the results, and the temporal dynamics were analyzed as well. The results show that a relatively simple neural network could learn to generate blendshape sequences from phoneme sequences with satisfactory results, except that requirements such as lip closure for certain consonants were not always met. The problems with consonants could to some extent also be seen in the ground truth. This could be addressed with consonant-specific post-processing, which made the network's animations indistinguishable from the ground truth while also being perceived as better than the baseline model's animations. In summary, the network learned vowels well but would probably have needed more data to satisfy the requirements for certain consonants; for the final product, those requirements can nevertheless be met through consonant-specific post-processing.
Bergmann, Kirsten, and Stefan Kopp. "Verbal or visual? How information is distributed across speech and gesture in spatial dialog". Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1037/.
This paper reports a study of how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. The influence of utterance meaning and the wider dialog context was tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, whereas feedback signals from the addressee had no influence.
Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition". Thesis, University of Western Sydney, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002." Bibliography: leaves 83–93.
Patterson, Robert W. "The effects of inaccurate speech information on performance in a visual search and identification task". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/30481.
Erdener, Vahit Dogu. "The effect of auditory, visual and orthographic information on second language acquisition". Thesis, University of Western Sydney, College of Arts, Education and Social Sciences, School of Psychology, 2002. http://handle.uws.edu.au:8081/1959.7/685.
Master of Arts (Hons)
Ostroff, Wendy Louise. "Non-linguistic Influences on Infants' Nonnative Phoneme Perception: Exaggerated Prosody and Visual Speech Information Aid Discrimination". Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27640.
Ph.D.
Books on the topic "Visual speech information"
Massaro, Dominic W. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Hillsdale, N.J.: Erlbaum Associates, 1987.
Cranton, Wayne, Mark Fihn, and SpringerLink (Online service), eds. Handbook of Visual Display Technology. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Learning Disabilities Sourcebook: Basic consumer health information about dyslexia, dyscalculia, dysgraphia, speech and communication disorders, auditory and visual processing disorders, and other conditions that make learning difficult, including attention deficit hyperactivity disorder, Down syndrome and other chromosomal disorders, fetal alcohol spectrum disorders, hearing and visual impairment, autism and other pervasive developmental disorders, and traumatic brain injury; along with facts about diagnosing learning disabilities, early intervention, the special education process, legal protections, assistive technology, and accommodations, and guidelines for life-stage transitions, suggestions for coping with daily challenges, a glossary of related terms, and a directory of additional resources. 4th ed. Detroit, MI: Omnigraphics, 2012.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2016.
Massaro, Dominic W. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Simpson, Jeffry A., and Dominic W. Massaro. Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Taylor & Francis Group, 2014.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Visual speech information"
Pondit, Ashish, Muhammad Eshaque Ali Rukon, Anik Das, and Muhammad Ashad Kabir. "BenAV: A Bengali Audio-Visual Corpus for Visual Speech Recognition". In Neural Information Processing, 526–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_45.
Gupta, Deepika, Preety Singh, V. Laxmi, and Manoj S. Gaur. "Boundary Descriptors for Visual Speech Recognition". In Computer and Information Sciences II, 307–13. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2155-8_39.
Giachanou, Anastasia, Guobiao Zhang, and Paolo Rosso. "Multimodal Fake News Detection with Textual, Visual and Semantic Information". In Text, Speech, and Dialogue, 30–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58323-1_3.
Wu, Shiow-yang, and Wen-Shen Chen. "Oral-Query-by-Sketch: An XML-Based Framework for Speech Access to Image Databases". In Visual and Multimedia Information Management, 341–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-0-387-35592-4_24.
Sagheer, Alaa, and Saleh Aly. "Integration of Face Detection and User Identification with Visual Speech Recognition". In Neural Information Processing, 479–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_57.
Gui, Jiaping, and Shilin Wang. "Shape Feature Analysis for Visual Speech and Speaker Recognition". In Communications in Computer and Information Science, 167–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23235-0_22.
Nakamura, Satoshi. "Fusion of Audio-Visual Information for Integrated Speech Processing". In Lecture Notes in Computer Science, 127–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45344-x_20.
Foo, Say Wei, and Liang Dong. "Recognition of Visual Speech Elements Using Hidden Markov Models". In Advances in Multimedia Information Processing - PCM 2002, 607–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_75.
Bastanfard, Azam, Mohammad Aghaahmadi, Alireza Abdi Kelishami, Maryam Fazel, and Maedeh Moghadam. "Persian Viseme Classification for Developing Visual Speech Training Application". In Advances in Multimedia Information Processing - PCM 2009, 1080–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_104.
Galuščáková, Petra, Pavel Pecina, and Jan Hajič. "Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval". In Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, 100–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33247-0_12.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Visual speech information"
Akdemir, Eren, and Tolga Ciloglu. "Using visual information in automatic speech segmentation". In 2008 IEEE 16th Signal Processing, Communication and Applications Conference (SIU). IEEE, 2008. http://dx.doi.org/10.1109/siu.2008.4632641.
Kawase, Saya, Jeesun Kim, Vincent Aubanel, and Chris Davis. "Perceiving foreign-accented auditory-visual speech in noise: The influence of visual form and timing information". In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-99.
Chen, Tsuhan, H. P. Graf, Homer H. Chen, Wu Chou, Barry G. Haskell, Eric D. Petajan, and Yao Wang. "Lip synchronization in talking head video utilizing speech information". In Visual Communications and Image Processing '95, edited by Lance T. Wu. SPIE, 1995. http://dx.doi.org/10.1117/12.206706.
Jiang, Wei, Lexing Xie, and Shih-Fu Chang. "Visual saliency with side information". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959946.
Karabalkan, H., and H. Erdogan. "Information fusion techniques in audio-visual speech recognition". In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136443.
Maulana, Muhammad Rizki Aulia Rahman, and Mohamad Ivan Fanany. "Indonesian audio-visual speech corpus for multimodal automatic speech recognition". In 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS). IEEE, 2017. http://dx.doi.org/10.1109/icacsis.2017.8355062.
Aubanel, Vincent, Cassandra Masters, Jeesun Kim, and Chris Davis. "Contribution of visual rhythmic information to speech perception in noise". In The 14th International Conference on Auditory-Visual Speech Processing. ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-18.
Zhao, Hui, and Chaojing Tang. "Visual speech synthesis based on Chinese dynamic visemes". In 2008 International Conference on Information and Automation (ICIA). IEEE, 2008. http://dx.doi.org/10.1109/icinfa.2008.4607983.
Luo, Yiyu, Jing Wang, Xinyao Wang, Liang Wen, and Lizhong Wang. "Audio-Visual Speech Separation Using I-Vectors". In 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP). IEEE, 2019. http://dx.doi.org/10.1109/icicsp48821.2019.8958547.
Lu, Longbin, Xinman Zhang, and Xuebin Xu. "Fusion of face and visual speech information for identity verification". In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2017. http://dx.doi.org/10.1109/ispacs.2017.8266530.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Visual speech information"
Yatsymirska, Mariya. Social Expression in Multimedia Texts. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.