Journal articles on the topic "Audiovisual speech processing"
Consult the top 50 journal articles for research on the topic "Audiovisual speech processing."
Chen, Tsuhan. "Audiovisual speech processing." IEEE Signal Processing Magazine 18, no. 1 (2001): 9–21. http://dx.doi.org/10.1109/79.911195.
Vatikiotis-Bateson, Eric, and Takaaki Kuratate. "Overview of audiovisual speech processing." Acoustical Science and Technology 33, no. 3 (2012): 135–41. http://dx.doi.org/10.1250/ast.33.135.
Francisco, Ana A., Alexandra Jesse, Margriet A. Groen, and James M. McQueen. "A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia." Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 144–58. http://dx.doi.org/10.1044/2016_jslhr-h-15-0375.
Bernstein, Lynne E., Edward T. Auer, Michael Wagner, and Curtis W. Ponton. "Spatiotemporal dynamics of audiovisual speech processing." NeuroImage 39, no. 1 (January 2008): 423–35. http://dx.doi.org/10.1016/j.neuroimage.2007.08.035.
Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Ojanen, Ville, Riikka Möttönen, Johanna Pekkola, Iiro P. Jääskeläinen, Raimo Joensuu, Taina Autti, and Mikko Sams. "Processing of audiovisual speech in Broca's area." NeuroImage 25, no. 2 (April 2005): 333–38. http://dx.doi.org/10.1016/j.neuroimage.2004.12.001.
Stevenson, Ryan A., Nicholas A. Altieri, Sunah Kim, David B. Pisoni, and Thomas W. James. "Neural processing of asynchronous audiovisual speech perception." NeuroImage 49, no. 4 (February 2010): 3308–18. http://dx.doi.org/10.1016/j.neuroimage.2009.12.001.
Hamilton, Roy H., Jeffrey T. Shenton, and H. Branch Coslett. "An acquired deficit of audiovisual speech processing." Brain and Language 98, no. 1 (July 2006): 66–73. http://dx.doi.org/10.1016/j.bandl.2006.02.001.
Dunham-Carr, Kacie, Jacob I. Feldman, David M. Simon, Sarah R. Edmunds, Alexander Tu, Wayne Kuang, Julie G. Conrad, Pooja Santapuram, Mark T. Wallace, and Tiffany G. Woynaroski. "The Processing of Audiovisual Speech Is Linked with Vocabulary in Autistic and Nonautistic Children: An ERP Study." Brain Sciences 13, no. 7 (July 8, 2023): 1043. http://dx.doi.org/10.3390/brainsci13071043.
Tomalski, Przemysław. "Developmental Trajectory of Audiovisual Speech Integration in Early Infancy. A Review of Studies Using the McGurk Paradigm." Psychology of Language and Communication 19, no. 2 (October 1, 2015): 77–100. http://dx.doi.org/10.1515/plc-2015-0006.
Ozker, Muge, Inga M. Schepers, John F. Magnotti, Daniel Yoshor, and Michael S. Beauchamp. "A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography." Journal of Cognitive Neuroscience 29, no. 6 (June 2017): 1044–60. http://dx.doi.org/10.1162/jocn_a_01110.
Simon, David M., and Mark T. Wallace. "Integration and Temporal Processing of Asynchronous Audiovisual Speech." Journal of Cognitive Neuroscience 30, no. 3 (March 2018): 319–37. http://dx.doi.org/10.1162/jocn_a_01205.
de la Vaux, Steven K., and Dominic W. Massaro. "Audiovisual speech gating: examining information and information processing." Cognitive Processing 5, no. 2 (April 23, 2004): 106–12. http://dx.doi.org/10.1007/s10339-004-0014-2.
Alsius, Agnès, Martin Paré, and Kevin G. Munhall. "Forty Years After Hearing Lips and Seeing Voices: the McGurk Effect Revisited." Multisensory Research 31, no. 1–2 (2018): 111–44. http://dx.doi.org/10.1163/22134808-00002565.
Moradi, Shahram, and Jerker Rönnberg. "Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing." Brain Sciences 13, no. 4 (April 1, 2023): 601. http://dx.doi.org/10.3390/brainsci13040601.
Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin's Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.
Drebing, Daniel, Jared Medina, H. Branch Coslett, Jeffrey T. Shenton, and Roy H. Hamilton. "An acquired deficit of intermodal temporal processing for audiovisual speech: A case study." Seeing and Perceiving 25 (2012): 186. http://dx.doi.org/10.1163/187847612x648152.
Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech." Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).
Thézé, Raphaël, Anne-Lise Giraud, and Pierre Mégevand. "The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech." Science Advances 6, no. 45 (November 2020): eabc6348. http://dx.doi.org/10.1126/sciadv.abc6348.
Hertrich, Ingo, Hermann Ackermann, Klaus Mathiak, and Werner Lutzenberger. "Early stages of audiovisual speech processing—a magnetoencephalography study." Journal of the Acoustical Society of America 121, no. 5 (May 2007): 3044. http://dx.doi.org/10.1121/1.4781737.
Harwood, Vanessa, Alisa Baron, Daniel Kleinman, Luca Campanelli, Julia Irwin, and Nicole Landi. "Event-Related Potentials in Assessing Visual Speech Cues in the Broader Autism Phenotype: Evidence from a Phonemic Restoration Paradigm." Brain Sciences 13, no. 7 (June 30, 2023): 1011. http://dx.doi.org/10.3390/brainsci13071011.
Vroomen, Jean, and Jeroen J. Stekelenburg. "Visual Anticipatory Information Modulates Multisensory Interactions of Artificial Audiovisual Stimuli." Journal of Cognitive Neuroscience 22, no. 7 (July 2010): 1583–96. http://dx.doi.org/10.1162/jocn.2009.21308.
Ghaneirad, Erfan, Ellyn Saenger, Gregor R. Szycik, Anja Čuš, Laura Möde, Christopher Sinke, Daniel Wiswede, Stefan Bleich, and Anna Borgolte. "Deficient Audiovisual Speech Perception in Schizophrenia: An ERP Study." Brain Sciences 13, no. 6 (June 19, 2023): 970. http://dx.doi.org/10.3390/brainsci13060970.
McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.
Tye-Murray, Nancy, Brent P. Spehar, Joel Myerson, Sandra Hale, and Mitchell S. Sommers. "The self-advantage in visual speech processing enhances audiovisual speech recognition in noise." Psychonomic Bulletin & Review 22, no. 4 (November 25, 2014): 1048–53. http://dx.doi.org/10.3758/s13423-014-0774-3.
Bernstein, Lynne E., Zhong-Lin Lu, and Jintao Jiang. "Quantified acoustic–optical speech signal incongruity identifies cortical sites of audiovisual speech processing." Brain Research 1242 (November 2008): 172–84. http://dx.doi.org/10.1016/j.brainres.2008.04.018.
Crosse, Michael J., and Edmund C. Lalor. "The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech." Journal of Neurophysiology 111, no. 7 (April 1, 2014): 1400–1408. http://dx.doi.org/10.1152/jn.00690.2013.
Roa Romero, Yadira, Daniel Senkowski, and Julian Keil. "Early and late beta-band power reflect audiovisual perception in the McGurk illusion." Journal of Neurophysiology 113, no. 7 (April 2015): 2342–50. http://dx.doi.org/10.1152/jn.00783.2014.
Dunham, Kacie, Alisa Zoltowski, Jacob I. Feldman, Samona Davis, Baxter Rogers, Michelle D. Failla, Mark T. Wallace, Carissa J. Cascio, and Tiffany G. Woynaroski. "Neural Correlates of Audiovisual Speech Processing in Autistic and Non-Autistic Youth." Multisensory Research 36, no. 3 (January 19, 2023): 263–88. http://dx.doi.org/10.1163/22134808-bja10093.
Vakhshiteh, Fatemeh, and Farshad Almasganj. "Exploration of Properly Combined Audiovisual Representation with the Entropy Measure in Audiovisual Speech Recognition." Circuits, Systems, and Signal Processing 38, no. 6 (November 9, 2018): 2523–43. http://dx.doi.org/10.1007/s00034-018-0975-5.
Lalonde, Kaylah, and Rachael Frush Holt. "Audiovisual speech integration development at varying levels of perceptual processing." Journal of the Acoustical Society of America 136, no. 4 (October 2014): 2263. http://dx.doi.org/10.1121/1.4900174.
Lalonde, Kaylah, and Rachael Frush Holt. "Audiovisual speech perception development at varying levels of perceptual processing." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 1713–23. http://dx.doi.org/10.1121/1.4945590.
Zhang, Yang, Bing Cheng, Tess Koerner, Christine Cao, Edward Carney, and Yue Wang. "Cortical processing of audiovisual speech perception in infancy and adulthood." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4234. http://dx.doi.org/10.1121/1.4831559.
Barrós-Loscertales, Alfonso, Noelia Ventura-Campos, Maya Visser, Agnès Alsius, Christophe Pallier, César Ávila Rivera, and Salvador Soto-Faraco. "Neural correlates of audiovisual speech processing in a second language." Brain and Language 126, no. 3 (September 2013): 253–62. http://dx.doi.org/10.1016/j.bandl.2013.05.009.
Loh, Marco, Gabriele Schmid, Gustavo Deco, and Wolfram Ziegler. "Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model." Journal of Cognitive Neuroscience 22, no. 2 (February 2010): 240–47. http://dx.doi.org/10.1162/jocn.2009.21202.
Tiippana, Kaisa. "Advances in Understanding the Phenomena and Processing in Audiovisual Speech Perception." Brain Sciences 13, no. 9 (September 20, 2023): 1345. http://dx.doi.org/10.3390/brainsci13091345.
Hällgren, Mathias, Birgitta Larsby, Björn Lyxell, and Stig Arlinger. "Evaluation of a Cognitive Test Battery in Young and Elderly Normal-Hearing and Hearing-Impaired Persons." Journal of the American Academy of Audiology 12, no. 7 (July 2001): 357–70. http://dx.doi.org/10.1055/s-0042-1745620.
Lalonde, Kaylah, and Grace A. Dwyer. "Visual phonemic knowledge and audiovisual speech-in-noise perception in school-age children." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A337. http://dx.doi.org/10.1121/10.0019067.
Costa-Giomi, Eugenia. "Mode of Presentation Affects Infants' Preferential Attention to Singing and Speech." Music Perception 32, no. 2 (December 1, 2014): 160–69. http://dx.doi.org/10.1525/mp.2014.32.2.160.
Pons, Ferran, Llorenç Andreu, Monica Sanz-Torrent, Lucía Buil-Legaz, and David J. Lewkowicz. "Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment." Journal of Child Language 40, no. 3 (July 9, 2012): 687–700. http://dx.doi.org/10.1017/s0305000912000189.
Vatakis, Argiro, and Charles Spence. "Assessing audiovisual saliency and visual-information content in the articulation of consonants and vowels on audiovisual temporal perception." Seeing and Perceiving 25 (2012): 29. http://dx.doi.org/10.1163/187847612x646514.
Paris, Tim, Jeesun Kim, and Christopher Davis. "Updating expectancies about audiovisual associations in speech." Seeing and Perceiving 25 (2012): 164. http://dx.doi.org/10.1163/187847612x647946.
Van der Burg, Erik, and Patrick T. Goodbourn. "Rapid, generalized adaptation to asynchronous audiovisual speech." Proceedings of the Royal Society B: Biological Sciences 282, no. 1804 (April 7, 2015): 20143083. http://dx.doi.org/10.1098/rspb.2014.3083.
Jerger, Susan, Markus F. Damian, Cassandra Karl, and Hervé Abdi. "Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech." Journal of Speech, Language, and Hearing Research 61, no. 12 (December 10, 2018): 3095–112. http://dx.doi.org/10.1044/2018_jslhr-h-17-0343.
Treille, Avril, Coriandre Vilain, Sonia Kandel, and Marc Sato. "Electrophysiological evidence for a self-processing advantage during audiovisual speech integration." Experimental Brain Research 235, no. 9 (July 4, 2017): 2867–76. http://dx.doi.org/10.1007/s00221-017-5018-0.
Hueber, Thomas, Eric Tatulli, Laurent Girin, and Jean-Luc Schwartz. "Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning." Neural Computation 32, no. 3 (March 2020): 596–625. http://dx.doi.org/10.1162/neco_a_01264.
Gijbels, Liesbeth, Jason D. Yeatman, Kaylah Lalonde, and Adrian K. C. Lee. "Audiovisual Speech Processing in Relationship to Phonological and Vocabulary Skills in First Graders." Journal of Speech, Language, and Hearing Research 64, no. 12 (December 13, 2021): 5022–40. http://dx.doi.org/10.1044/2021_jslhr-21-00196.
Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.
Jansen, Samantha D., Joseph R. Keebler, and Alex Chaparro. "Shifts in Maximum Audiovisual Integration with Age." Multisensory Research 31, no. 3–4 (2018): 191–212. http://dx.doi.org/10.1163/22134808-00002599.
Schabus, Dietmar, Michael Pucher, and Gregor Hofer. "Joint Audiovisual Hidden Semi-Markov Model-Based Speech Synthesis." IEEE Journal of Selected Topics in Signal Processing 8, no. 2 (April 2014): 336–47. http://dx.doi.org/10.1109/jstsp.2013.2281036.