Academic literature on the topic "Audiovisual speech processing"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Browse the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Audiovisual speech processing".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
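As a rough illustration of the kind of style formatting the "Add to bibliography" button performs, the sketch below renders one record from the list that follows in APA-like and MLA-like strings. The record fields and the two formatter functions are illustrative assumptions, not the site's actual implementation, which relies on full citation-style definitions and handles many more edge cases.

```python
# Minimal sketch of multi-style citation formatting (assumed, simplified fields).
def format_apa(rec: dict) -> str:
    """Rough APA-like reference string for a journal article."""
    authors = ", ".join(rec["authors"])
    return (f"{authors} ({rec['year']}). {rec['title']}. "
            f"{rec['journal']}, {rec['volume']}({rec['issue']}), {rec['pages']}. "
            f"https://doi.org/{rec['doi']}")

def format_mla(rec: dict) -> str:
    """Rough MLA-like reference string for a journal article."""
    authors = ", and ".join(rec["authors"])
    return (f'{authors}. "{rec["title"]}." {rec["journal"]}, '
            f"vol. {rec['volume']}, no. {rec['issue']}, {rec['year']}, "
            f"pp. {rec['pages']}, doi:{rec['doi']}.")

record = {  # example metadata taken from entry 1 of the list below
    "authors": ["Chen, Tsuhan"],
    "year": 2001,
    "title": "Audiovisual speech processing",
    "journal": "IEEE Signal Processing Magazine",
    "volume": 18, "issue": 1, "pages": "9-21",
    "doi": "10.1109/79.911195",
}

print(format_apa(record))
print(format_mla(record))
```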

Journal articles on the topic "Audiovisual speech processing"

1. Chen, Tsuhan. "Audiovisual speech processing." IEEE Signal Processing Magazine 18, no. 1 (2001): 9–21. http://dx.doi.org/10.1109/79.911195.

2. Vatikiotis-Bateson, Eric, and Takaaki Kuratate. "Overview of audiovisual speech processing." Acoustical Science and Technology 33, no. 3 (2012): 135–41. http://dx.doi.org/10.1250/ast.33.135.

3. Francisco, Ana A., Alexandra Jesse, Margriet A. Groen, and James M. McQueen. "A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia." Journal of Speech, Language, and Hearing Research 60, no. 1 (2017): 144–58. http://dx.doi.org/10.1044/2016_jslhr-h-15-0375.

Abstract: Purpose Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results Adult readers with dyslexia showed less sensitivity t…
4. Bernstein, Lynne E., Edward T. Auer, Michael Wagner, and Curtis W. Ponton. "Spatiotemporal dynamics of audiovisual speech processing." NeuroImage 39, no. 1 (2008): 423–35. http://dx.doi.org/10.1016/j.neuroimage.2007.08.035.

5. Dunham-Carr, Kacie, Jacob I. Feldman, David M. Simon, et al. "The Processing of Audiovisual Speech Is Linked with Vocabulary in Autistic and Nonautistic Children: An ERP Study." Brain Sciences 13, no. 7 (2023): 1043. http://dx.doi.org/10.3390/brainsci13071043.

Abstract: Explaining individual differences in vocabulary in autism is critical, as understanding and using words to communicate are key predictors of long-term outcomes for autistic individuals. Differences in audiovisual speech processing may explain variability in vocabulary in autism. The efficiency of audiovisual speech processing can be indexed via amplitude suppression, wherein the amplitude of the event-related potential (ERP) is reduced at the P2 component in response to audiovisual speech compared to auditory-only speech. This study used electroencephalography (EEG) to measure P2 amplitudes in… (An illustrative sketch of this amplitude-suppression measure appears after this list.)
6. Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (1997): 347. http://dx.doi.org/10.1068/v970029.

Abstract: Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/…
7. Ojanen, Ville, Riikka Möttönen, Johanna Pekkola, et al. "Processing of audiovisual speech in Broca's area." NeuroImage 25, no. 2 (2005): 333–38. http://dx.doi.org/10.1016/j.neuroimage.2004.12.001.

8. Stevenson, Ryan A., Nicholas A. Altieri, Sunah Kim, David B. Pisoni, and Thomas W. James. "Neural processing of asynchronous audiovisual speech perception." NeuroImage 49, no. 4 (2010): 3308–18. http://dx.doi.org/10.1016/j.neuroimage.2009.12.001.

9. Hamilton, Roy H., Jeffrey T. Shenton, and H. Branch Coslett. "An acquired deficit of audiovisual speech processing." Brain and Language 98, no. 1 (2006): 66–73. http://dx.doi.org/10.1016/j.bandl.2006.02.001.

10. Tomalski, Przemysław. "Developmental Trajectory of Audiovisual Speech Integration in Early Infancy. A Review of Studies Using the McGurk Paradigm." Psychology of Language and Communication 19, no. 2 (2015): 77–100. http://dx.doi.org/10.1515/plc-2015-0006.

Abstract: Apart from their remarkable phonological skills, young infants prior to their first birthday show an ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect the audiovisual conflict of speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge, but also to language production during subsequent years. This article reviews recent experimental work delineating the complex development…
More sources
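The abstract of entry 5 above describes indexing the efficiency of audiovisual speech processing via amplitude suppression at the P2 event-related potential. The sketch below is a minimal, hypothetical illustration of that kind of measure on synthetic single-channel epochs; the sampling rate, baseline, P2 window, and toy data are assumptions, not the study's actual pipeline.

```python
# Sketch of a P2 amplitude-suppression index: average epochs into ERPs per
# condition, take the mean amplitude in a nominal P2 window, and treat
# "auditory-only minus audiovisual" as the suppression (all values assumed).
import numpy as np

FS = 500                     # sampling rate in Hz (assumed)
T0 = 0.100                   # epoch starts 100 ms before stimulus onset (assumed)
P2_WINDOW = (0.150, 0.250)   # nominal P2 latency range in seconds (assumed)

def p2_amplitude(epochs: np.ndarray) -> float:
    """Mean ERP amplitude in the P2 window.

    epochs: array of shape (n_trials, n_samples) for one channel,
            time-locked so that stimulus onset falls T0 into the epoch.
    """
    erp = epochs.mean(axis=0)                  # average over trials -> ERP
    start = int((T0 + P2_WINDOW[0]) * FS)
    stop = int((T0 + P2_WINDOW[1]) * FS)
    return float(erp[start:stop].mean())

# Synthetic stand-in data: 60 trials x 400 samples per condition.
rng = np.random.default_rng(0)
audio_only = rng.normal(0.0, 1.0, (60, 400)) + 2.0   # offset stands in for a larger P2 (toy data)
audiovisual = rng.normal(0.0, 1.0, (60, 400)) + 1.2  # smaller offset: suppressed P2 (toy data)

suppression = p2_amplitude(audio_only) - p2_amplitude(audiovisual)
print(f"P2 amplitude suppression (arbitrary units): {suppression:.2f}")
```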

Theses on the topic "Audiovisual speech processing"

1. Morís Fernández, Luis. "Audiovisual speech processing: the role of attention and conflict." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/385348.

Abstract: Events in our environment rarely excite only one sensory pathway; they usually involve several modalities offering complementary information. These different pieces of information are usually integrated into a single percept through the process of multisensory integration. The present dissertation addresses how and under what circumstances this multisensory integration process occurs in the context of audiovisual speech. The findings of this dissertation challenge previous views of audiovisual integration in speech as a low-level automatic process by providing evidence, first, of the influence of the…
2. Copeland, Laura. "Audiovisual processing of affective and linguistic prosody: an event-related fMRI study." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111605.

Abstract: This study was designed to clarify some of the issues surrounding the nature of hemispheric contributions to the processing of emotional and linguistic prosody, as well as to examine the relative contribution of different sensory modalities in processing prosodic structures. Ten healthy young participants were presented with semantically neutral sentences expressing affective or linguistic prosody solely through the use of non-verbal cues (intonation, facial expressions) while undergoing fMRI. The sentences were presented under auditory, visual, as well as audio-visual conditions. The emotiona…
3. Krause, Hanna. "Audiovisual processing in schizophrenia: neural responses in audiovisual speech interference and semantic priming." Doctoral thesis, supervised by Andreas K. Engel. Hamburg: Staats- und Universitätsbibliothek Hamburg, 2015. http://d-nb.info/1075858569/34.

4. Sadok, Samir. "Audiovisual speech representation learning applied to emotion recognition." Electronic thesis, CentraleSupélec, 2024. http://www.theses.fr/2024CSUP0003.

Abstract: Emotions are vital in our everyday lives and have become a major focus of ongoing research. Automatic emotion recognition has attracted considerable attention because of its wide-ranging applications in sectors such as healthcare, education, entertainment, and marketing. This progress in emotion recognition is essential for fostering the development of human-centred artificial intelligence. Supervised emotion recognition systems have improved considerably compared with traditional learning approaches…
5. Biau, Emmanuel. "Beat gestures and speech processing: when prosody extends to the speaker's hands." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/325429.

Abstract: Speakers naturally accompany their speech with hand gestures and extend auditory prosody to the visual modality through rapid beat gestures that help them structure their narrative and emphasize relevant information. The present thesis aimed to investigate beat gestures and their neural correlates on the listener's side. We developed a naturalistic approach combining political discourse presentations with neuroimaging techniques (ERPs, EEG and fMRI) and behavioral measures. The main findings of the thesis first revealed that beat-speech processing engaged language-related areas, suggesting…
6. Blomberg, Rina. "Cortical Phase Synchronisation Mediates Natural Face-Speech Perception." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122825.

Abstract: It is a challenging task for researchers to determine how the brain solves multisensory perception, and the neural mechanisms involved remain subject to theoretical conjecture. According to a hypothesised cortical model for natural audiovisual stimulation, phase-synchronised communications between participating brain regions play a mechanistic role in natural audiovisual perception. The purpose of this study was to test the hypothesis by investigating oscillatory dynamics from ongoing EEG recordings whilst participants passively viewed ecologically realistic face-speech interactions in film.
7. Girin, Laurent. "Débruitage de parole par un filtrage utilisant l'image du locuteur" [Speech denoising by filtering using the speaker's image]. Grenoble INPG, 1997. http://www.theses.fr/1997INPG0207.

Abstract: A major problem for telecommunication systems is speech denoising, that is, attenuating the effects of interfering noise in order to improve the intelligibility and quality of the message. Humans have a particular competence in this domain: the ability to extract auditory information thanks to the signals captured visually on the interlocutor's face. In other words, humans know how to exploit the auditory and visual bimodality of speech to enhance it. The automated use of visual information has already made it possible to improve the robustness of…
8. Teissier, Pascal. "Fusion de capteurs avec contrôle du contexte : application à la reconnaissance de parole dans le bruit" [Sensor fusion with context control: application to speech recognition in noise]. Grenoble INPG, 1999. http://www.theses.fr/1999INPG0023.

Abstract: This thesis is devoted to sensor fusion controlled by contextual information. The target application is audiovisual speech recognition in noise. We first review the literature on existing audiovisual automatic speech recognition systems, together with the broader fields of sensor fusion and speech perception. Building on this state-of-the-art review, which reveals a considerable diversity of approaches, we set up a strategy and a methodology for studying and… (A minimal sketch of this kind of context-weighted fusion appears after this list of theses.)
9. Decroix, François-Xavier. "Apprentissage en ligne de signatures audiovisuelles pour la reconnaissance et le suivi de personnes au sein d'un réseau de capteurs ambiants" [Online learning of audiovisual signatures for recognizing and tracking people in a network of ambient sensors]. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30298/document.

Abstract: The neOCampus initiative, launched in 2013 by Université Paul Sabatier, aims to create a connected, innovative, intelligent and sustainable campus by drawing on the expertise of 11 laboratories and several industrial partners. These multidisciplinary skills are combined in order to improve the everyday comfort of campus users (students, teaching staff, administrative personnel) and to reduce the campus's ecological footprint. The intelligence we wish to bring to the campus of the future requires giving its buildings a perception of their internal activity. Indeed…
10. Robert-Ribes, Jordi. "Modèles d'intégration audiovisuelle de signaux linguistiques : de la perception humaine à la reconnaissance automatique des voyelles" [Models of audiovisual integration of linguistic signals: from human perception to automatic vowel recognition]. Grenoble INPG, 1995. http://www.theses.fr/1995INPG0032.

Abstract: This thesis studies models for integrating auditory and visual information, with the goal of obtaining a plausible and functional model for the audiovisual recognition of French vowels. We review the experimental literature on audiovisual integration in speech perception. We then present four models and classify them according to principles inspired both by experimental psychology and by the sensor-fusion literature. Our plausibility constraint (agreement with experimental data) allows us to eliminate…
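Several of the theses above (entries 7–10) deal with audio-visual fusion for speech recognition in noise, including fusion controlled by contextual information such as the noise level. The sketch below is a minimal, hypothetical illustration of that general idea, namely late fusion of per-class audio and visual scores with an SNR-dependent weight; the weighting curve, score values, and class set are assumptions, not the models actually proposed in those theses. Log-linear weighting is only one simple choice among the fusion architectures discussed in this literature.

```python
# Context-weighted late fusion sketch: combine audio and visual class scores
# with a weight driven by an estimate of the acoustic context (here, SNR).
import numpy as np

def snr_to_audio_weight(snr_db: float, lo: float = -5.0, hi: float = 20.0) -> float:
    """Map estimated SNR (dB) to an audio-stream weight in [0, 1] (assumed curve)."""
    return float(np.clip((snr_db - lo) / (hi - lo), 0.0, 1.0))

def fuse(audio_logp: np.ndarray, visual_logp: np.ndarray, snr_db: float) -> int:
    """Weighted log-linear fusion; returns the index of the winning class."""
    w = snr_to_audio_weight(snr_db)
    fused = w * audio_logp + (1.0 - w) * visual_logp
    return int(np.argmax(fused))

# Toy per-class log-probabilities for the vowels /a/, /i/, /u/ (hypothetical values).
audio_scores = np.log(np.array([0.20, 0.50, 0.30]))   # noisy audio favours /i/
visual_scores = np.log(np.array([0.70, 0.20, 0.10]))  # lipreading favours /a/

print(fuse(audio_scores, visual_scores, snr_db=15.0))  # clean-ish audio: audio wins (/i/)
print(fuse(audio_scores, visual_scores, snr_db=-3.0))  # very noisy: visual wins (/a/)
```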

Books on the topic "Audiovisual speech processing"

1. Bailly, Gerard, Pascal Perrier, and Eric Vatikiotis-Bateson, eds. Audiovisual Speech Processing. Cambridge University Press, 2012. http://dx.doi.org/10.1017/cbo9780511843891.

2. Randazzo, Melissa. Audiovisual Integration in Apraxia of Speech: EEG Evidence for Processing Differences. [publisher not identified], 2016.

3. Audiovisual speech processing. Cambridge University Press, 2012.

4. Vatikiotis-Bateson, Eric, Pascal Perrier, and Gérard Bailly. Audiovisual Speech Processing. Cambridge University Press, 2012.

5. Vatikiotis-Bateson, Eric, Pascal Perrier, and Gérard Bailly. Audiovisual Speech Processing. Cambridge University Press, 2015.

6. Abel, Andrew, and Amir Hussain. Cognitively Inspired Audiovisual Speech Filtering: Towards an Intelligent, Fuzzy Based, Multimodal, Two-Stage Speech Enhancement System. Springer, 2015.

7. Abel, Andrew, and Amir Hussain. Cognitively Inspired Audiovisual Speech Filtering: Towards an Intelligent, Fuzzy Based, Multimodal, Two-Stage Speech Enhancement System. Springer International Publishing AG, 2015.

More sources

Book chapters on the topic "Audiovisual speech processing"

1. Riekhakaynen, Elena, and Elena Zatevalova. "Should We Believe Our Eyes or Our Ears? Processing Incongruent Audiovisual Stimuli by Russian Listeners." In Speech and Computer. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20980-2_51.

2. Aleksic, Petar S., Gerasimos Potamianos, and Aggelos K. Katsaggelos. "Audiovisual Speech Processing." In The Essential Guide to Video Processing. Elsevier, 2009. http://dx.doi.org/10.1016/b978-0-12-374456-2.00024-4.

3. Pantic, Maja. "Face for Interface." In Encyclopedia of Multimedia Technology and Networking, Second Edition. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch075.

Abstract: The human face is involved in an impressive variety of different activities. It houses the majority of our sensory apparatus: eyes, ears, mouth, and nose, allowing the bearer to see, hear, taste, and smell. Apart from these biological functions, the human face provides a number of signals essential for interpersonal communication in our social life. The face houses the speech production apparatus and is used to identify other members of the species, to regulate the conversation by gazing or nodding, and to interpret what has been said by lip reading. It is our direct and naturally preeminent m…

Conference papers on the topic "Audiovisual speech processing"

1. Sankar, Sanjana, Martin Lenglet, Gérard Bailly, Denis Beautemps, and Thomas Hueber. "Cued Speech Generation Leveraging a Pre-trained Audiovisual Text-to-Speech Model." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888365.

2. Clarke, Jason, Yoshihiko Gotoh, and Stefan Goetze. "Speaker Embedding Informed Audiovisual Active Speaker Detection for Egocentric Recordings." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890414.

3. Han, Fangzhou, Tianyi Yu, Lamei Zhang, Lingyu Si, and Yiqi Zhang. "SlotFusion: Object-Centric Audiovisual Feature Fusion with Slot Attention for Remote Sensing Scene Recognition." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888715.

4. Vatikiotis-Bateson, E., K. G. Munhall, Y. Kasahara, F. Garcia, and H. Yehia. "Characterizing audiovisual information during speech." In 4th International Conference on Spoken Language Processing (ICSLP 1996). ISCA, 1996. http://dx.doi.org/10.21437/icslp.1996-379.

5. Petridis, Stavros, and Maja Pantic. "Audiovisual discrimination between laughter and speech." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518810.

6. Petridis, Stavros, Themos Stafylakis, Pingchuan Ma, Feipeng Cai, Georgios Tzimiropoulos, and Maja Pantic. "End-to-End Audiovisual Speech Recognition." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461326.

7. Tran, Tam, Soroosh Mariooryad, and Carlos Busso. "Audiovisual corpus to analyze whisper speech." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639243.

8. Katsamanis, Athanassios, George Papandreou, and Petros Maragos. "Audiovisual-to-Articulatory Speech Inversion Using HMMs." In 2007 IEEE 9th Workshop on Multimedia Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/mmsp.2007.4412915.

9. Rosenblum, Lawrence D. "The perceptual basis for audiovisual speech integration." In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-424.

10. Silva, Samuel, and António Teixeira. "An Anthropomorphic Perspective for Audiovisual Speech Synthesis." In 10th International Conference on Bio-inspired Systems and Signal Processing. SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006150201630172.
