Follow this link to see other types of publications on the topic: Lipreading.

Journal articles on the topic "Lipreading"


See the top 50 journal articles for research on the topic "Lipreading".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online, if it is included in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid". Journal of Speech, Language, and Hearing Research 32, no. 2 (June 1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.

Abstract:
A congenitally, profoundly deaf adult who had received 41 hours of tactual word recognition training in a previous study was assessed in tracking of connected discourse. This assessment was conducted in three phases. In the first phase, the subject used the Tacticon 1600 electrocutaneous vocoder to track a narrative in three conditions: (a) lipreading and aided hearing (L+H), (b) lipreading and tactual vocoder (L+TV), and (c) lipreading, tactual vocoder, and aided hearing (L+TV+H). Subject performance was significantly better in the L+TV+H condition than in the L+H condition, suggesting that the subject benefitted from the additional information provided by the tactual vocoder. In the second phase, the Tactaid II vibrotactile aid was used in three conditions: (a) lipreading alone, (b) lipreading and tactual aid (L+TA), and (c) lipreading, tactual aid, and aided hearing (L+TA+H). The subject was able to combine cues from the Tactaid II with those from lipreading and aided hearing. In the third phase, both tactual devices were used in six conditions: (a) lipreading alone (L), (b) lipreading and aided hearing (L+H), (c) lipreading and Tactaid II (L+TA), (d) lipreading and Tacticon 1600 (L+TV), (e) lipreading, Tactaid II, and aided hearing (L+TA+H), and (f) lipreading, Tacticon 1600, and aided hearing (L+TV+H). In this phase, only the Tactaid II significantly improved tracking performance over lipreading and aided hearing. Overall, improvement in tracking performance occurred within and across phases of this study.
2

Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability". Journal of Speech, Language, and Hearing Research 57, no. 2 (April 2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.

Abstract:
Purpose: The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method: Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4 lipreading instruments plus measures of perceptual, cognitive, and linguistic abilities. Results: For both groups, lipreading performance improved with age on all 4 measures of lipreading, with the HL group performing better than the NH group. Scores from the 4 measures loaded strongly on a single principal component. Only age, hearing status, and visuospatial working memory were significant predictors of lipreading performance. Conclusions: Results showed that children's lipreading ability is not fixed but rather improves between 7 and 14 years of age. The finding that children with HL lipread better than those with NH suggests experience plays an important role in the development of this ability. In addition to age and hearing status, visuospatial working memory predicts lipreading performance in children, just as it does in adults. Future research on the developmental time-course of lipreading could permit interventions and pedagogies to be targeted at periods in which improvement is most likely to occur.
3

Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading". Ear and Hearing 9, no. 6 (December 1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.

4

Paulesu, E., D. Perani, V. Blasi, G. Silani, N. A. Borghese, U. De Giovanni, S. Sensolo, and F. Fazio. "A Functional-Anatomical Model for Lipreading". Journal of Neurophysiology 90, no. 3 (September 2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.

Abstract:
Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoked perception of isolated speech sounds (nonlexical lipreading) was associated with bilateral activation of the auditory association cortex around Wernicke's area, of left dorsal premotor cortex, and of the left opercular-premotor division of the left inferior frontal gyrus (Broca's area). The supplementary motor area was active as well. These areas have all been implicated in phonological processing, speech and mouth motor planning, and execution. In addition, nonlexical lipreading also differentially activated visual motion areas. Lexical access through lipreading was associated with a similar pattern of activation and with additional foci in ventral and dorsolateral prefrontal cortex bilaterally and in left inferior parietal cortex. Linear regression analysis of cerebral blood flow and proficiency for lexical lipreading further clarified the role of these areas in gaining access to language through lipreading. The results suggest cortical activation circuits for lipreading from action representations that may differentiate lexical access from nonlexical processes.
5

Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment". Journal of Speech, Language, and Hearing Research 60, no. 3 (March 2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.

Abstract:
Purpose: Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI). Method: Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests. Results: Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading both in typically developing children and in children with SLI. Lipreading was also found to correlate with several cognitive skills, for example, short-term memory capacity and verbal motor skills. Conclusions: Speech processing deficits in SLI extend also to the perception of visual speech. Lipreading performance was associated with phonological skills. Poor lipreading in children with SLI may thus be related to problems in phonological processing.
6

Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What makes a Skilled Speechreader?" Spanish Journal of Psychology 11, no. 2 (November 2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.

Abstract:
Lipreading proficiency was investigated in a group of hearing-impaired people, all of them knowing Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and some other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. They all had sensorineural hearing losses (from severe to profound). The lipreading procedures comprised identification of words in isolation. The words selected for presentation in isolation were spoken by the same talker. Identification of words required participants to select their responses from a set of four appropriately labelled pictures. Lipreading was significantly correlated with intelligence and intelligibility. Multiple regression analyses were used to obtain a prediction equation for the lipreading measures. As a result of this procedure, it is concluded that proficient deaf lipreaders are more intelligent and that their oral speech is more comprehensible to others.
7

Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English". Journal of Speech, Language, and Hearing Research 43, no. 1 (February 2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.

Abstract:
The speech perception skills of GS, a Swedish adult deaf man who has used a "natural" tactile supplement to lipreading for over 45 years, were tested in two languages: Swedish and English. Two different tactile supplements to lipreading were investigated. In the first, "Tactiling," GS detected the vibrations accompanying speech by placing his thumb directly on the speaker’s throat. In the second, a simple tactile aid consisting of a throat microphone, amplifier, and a hand-held bone vibrator was used. Both supplements led to improved lipreading of materials ranging in complexity from consonants in [aCa] nonsense syllables to Speech Tracking. Analysis of GS’s results indicated that the tactile signal assisted him in identifying vowel duration, consonant voicing, and some manner-of-articulation categories. GS’s tracking rate in Swedish was around 40 words per minute when the materials were presented via lipreading alone. When the lipreading signal was supplemented by tactile cues, his tracking rates were in the range of 60–65 words per minute. Although GS’s tracking rates for English materials were around half those achieved in Swedish, his performance showed a similar pattern in that the use of tactile cues led to improvements of around 40% over lipreading alone.
8

Suess, Nina, Anne Hauswald, Verena Zehentner, Jessica Depireux, Gudrun Herzog, Sebastian Rösch, and Nathan Weisz. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language". PLOS ONE 17, no. 9 (September 30, 2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.

Abstract:
Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we wanted to (1) investigate how linguistic characteristics of language on the one hand and hearing impairment on the other hand have an impact on lipreading abilities and (2) provide a tool to assess lipreading abilities for German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers, words, and sentences) were spoken. The task for our participants was to recognize the spoken stimuli just by visual inspection. We used different versions of one test and investigated the impact of item categories, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers on the recognition score. We found an effect of item categories, articulation, sentence frequency, and sentence length on the recognition score. With respect to hearing impairment, we found that higher subjective hearing impairment is associated with a higher test score. We did not find any evidence that prelingually deaf individuals show enhanced lipreading skills over people with postlingually acquired hearing impairment. However, we see an interaction with education only in the prelingually deaf, but not in the population with postlingually acquired hearing loss. This points to the fact that there are different factors contributing to enhanced lipreading abilities depending on the onset of hearing impairment (prelingual vs. postlingual). Overall, lipreading skills vary strongly in the general population, independent of hearing impairment. Based on our findings, we constructed a new and efficient lipreading assessment tool (SaLT) that can be used to test behavioral lipreading abilities in the German-speaking population.
9

Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks". Applied Sciences 11, no. 15 (July 29, 2021): 6975. http://dx.doi.org/10.3390/app11156975.

Abstract:
Lipreading aims to recognize sentences being spoken by a talking face. In recent years, lipreading methods have achieved high accuracy on large datasets and made breakthrough progress. However, lipreading is still far from solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing training gradients and slow convergence. To overcome these problems, we propose an efficient end-to-end sentence-level lipreading model that uses an encoder based on a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. This partly eliminates the RNN (LSTM, GRU) defects of gradient vanishing and insufficient performance, and it yields a notable performance improvement as well as faster convergence. Experiments show that training and convergence are 50% faster than the state-of-the-art method, and accuracy improves by 2.4% on the GRID dataset.
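To make that pipeline concrete, here is a minimal PyTorch sketch of the 3D-convolution front end, temporal convolutional stack, and CTC objective named in the abstract. It is a sketch under assumed dimensions, not the authors' code: the layer widths, the spatial pooling that stands in for the ResNet50 trunk, and the toy 28-symbol vocabulary are all invented for illustration.

```python
import torch
import torch.nn as nn

class TCNLipreader(nn.Module):
    """Toy 3D-conv front end + dilated TCN + CTC head; sizes are assumptions."""
    def __init__(self, vocab=28, dim=256):
        super().__init__()
        # Spatiotemporal front end; a real system would insert the ResNet50
        # trunk here to turn each frame into a feature vector.
        self.front = nn.Sequential(
            nn.Conv3d(1, dim, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3)),
            nn.ReLU(),
        )
        # Temporal Convolutional Network: dilated 1-D convolutions over frames.
        self.tcn = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(dim, vocab)      # characters + CTC blank (index 0)

    def forward(self, x):                      # x: (batch, 1, T, H, W)
        z = self.front(x).mean(dim=(3, 4))     # pool space -> (batch, dim, T)
        z = self.tcn(z).transpose(1, 2)        # -> (batch, T, dim)
        return self.head(z).log_softmax(-1)    # per-frame log-probs for CTC

model = TCNLipreader()
log_probs = model(torch.randn(2, 1, 16, 64, 64))   # two 16-frame lip clips
targets = torch.randint(1, 28, (2, 6))             # two 6-symbol transcripts
loss = nn.CTCLoss(blank=0)(log_probs.transpose(0, 1),  # CTC wants (T, batch, vocab)
                           targets,
                           torch.full((2,), 16, dtype=torch.long),
                           torch.full((2,), 6, dtype=torch.long))
```

Because every layer is convolutional, gradients flow through a fixed number of steps regardless of sequence length, which is the property such models exploit to avoid RNN-style vanishing gradients.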
10

Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.

Abstract:
Lipreading has many potential applications, such as surveillance and video conferencing. Despite this, most work on lipreading systems has been limited to classifying silent videos into classes representing text phrases. However, there are multiple problems with framing lipreading as a text-based classification task, such as its dependence on a particular language and vocabulary mapping. Thus, in this paper we propose a multi-view lipreading-to-audio system, namely Lipper, which models lipreading as a regression task. The model takes silent videos as input and produces speech as the output. With multi-view silent videos, we observe an improvement over single-view speech reconstruction results. We show this by presenting an exhaustive set of experiments for speaker-dependent, out-of-vocabulary, and speaker-independent settings. Further, we compare the delay values of Lipper with other speechreading systems in order to show the real-time nature of the audio produced. We also perform a user study on the audios produced in order to understand the comprehensibility of audio generated with Lipper.
11

Muljono, Muljono, Galuh Wilujeng Saraswati, Nurul Anisa Sri Winarsih, Nur Rokhman, Catur Supriyanto, and Pujiono Pujiono. "Developing BacaBicara: An Indonesian Lipreading System as an Independent Communication Learning for the Deaf and Hard-of-Hearing". International Journal of Emerging Technologies in Learning (iJET) 14, no. 04 (February 27, 2019): 44. http://dx.doi.org/10.3991/ijet.v14i04.9578.

Abstract:
Deaf and hard-of-hearing people have limitations in communication, especially in aspects of language, intelligence, and social adjustment. To communicate, deaf people use sign language or lipreading. For hearing people, it is very difficult to use sign language: they have to memorize many hand signs. Therefore, lipreading is necessary for communication between hearing and deaf people. In Indonesia, there are still few educational media for deaf people to learn lipreading. To overcome this challenge, we developed a lipreading educational application, called BacaBicara, to help deaf and hard-of-hearing people learn Bahasa Indonesia. User-Centered Design (UCD) was implemented to design the application and to analyze the constraints and conceptual models for the needs of users. This conceptual model uses pictures, lipreading video, text, and sign language to help users understand the contents. A high-fidelity prototype was implemented for usability testing. Based on the evaluation of the application, the results show that the prototype matches the usability goals and the user experience.
12

Johnson, Fern M., Leslie H. Hicks, Terry Goldberg, and Michael S. Myslobodsky. "Sex differences in lipreading". Bulletin of the Psychonomic Society 26, no. 2 (August 1988): 106–8. http://dx.doi.org/10.3758/bf03334875.

13

Updike, Claudia D., Joanne M. Rasmussen, Roberta Arndt, and Cathy German. "Revised Craig Lipreading Inventory". Perceptual and Motor Skills 74, no. 1 (February 1992): 267–77. http://dx.doi.org/10.2466/pms.1992.74.1.267.

Abstract:
The two purposes of this study were to shorten the Craig Lipreading Inventory without affecting its reliability and validity and to establish normative data on the revised version. The full inventory was administered to 75 children. By item analysis, half of the items were selected to comprise the brief version; both versions were administered to another group of 75 children. Scores on the two versions correlated (.91 and .92, respectively, for Word Forms A and B and .97 and .95, respectively, for Sentence Forms A and B), thereby substantiating the construct validity of the briefer version. There was significantly high intertest reliability for the Word Forms (.80) and Sentence Forms (.82) of the briefer inventory. Normative data were computed for each age group. This briefer version is a temporally efficient tool for evaluating lipreading ability of children.
14

Ebrahimi, D., and H. Kunov. "Peripheral vision lipreading aid". IEEE Transactions on Biomedical Engineering 38, no. 10 (1991): 944–52. http://dx.doi.org/10.1109/10.88440.

15

Campbell, Ruth, Theodor Landis, and Marianne Regard. "Face Recognition and Lipreading". Brain 109, no. 3 (1986): 509–21. http://dx.doi.org/10.1093/brain/109.3.509.

16

Samuelsson, Stefan, and Jerker Rönnberg. "Script activation in lipreading". Scandinavian Journal of Psychology 32, no. 2 (June 1991): 124–43. http://dx.doi.org/10.1111/j.1467-9450.1991.tb00863.x.

17

Wiss, Rosemary. "Lipreading: Remembering Saartjie Baartman". Australian Journal of Anthropology 5, no. 3 (May 1994): 11–40. http://dx.doi.org/10.1111/j.1835-9310.1994.tb00323.x.

18

Chiou, G. I., and Jenq-Neng Hwang. "Lipreading from color video". IEEE Transactions on Image Processing 6, no. 8 (August 1997): 1192–95. http://dx.doi.org/10.1109/83.605417.

19

Bernstein, Lynne E., Marilyn E. Demorest, Michael P. O'Connell, and David C. Coulter. "Lipreading with vibrotactile vocoders". Journal of the Acoustical Society of America 87, S1 (May 1990): S124–S125. http://dx.doi.org/10.1121/1.2027907.

20

Yu, Xuhu, Zhong Wan, Zehao Shi, and Lei Wang. "Lipreading Using Liquid State Machine with STDP-Tuning". Applied Sciences 12, no. 20 (October 17, 2022): 10484. http://dx.doi.org/10.3390/app122010484.

Abstract:
Lipreading refers to the task of decoding the text content of a speaker based on visual information about the movement of the speaker’s lips. With the development of deep learning in recent years, lipreading has attracted extensive research. However, deep learning methods require substantial computing resources, which is not conducive to migrating such systems to edge devices. Inspired by the work of Spiking Neural Networks (SNNs) in recognizing human actions and gestures, we propose a lipreading system based on SNNs. Specifically, we construct the front-end feature extractor of the system using a Liquid State Machine (LSM). A heuristic algorithm is then used to select appropriate parameters for the back-end classifier. On small-scale lipreading datasets, our system achieves good recognition accuracy. Compared to other networks, ours performs better in terms of accuracy and the ratio of learned parameters, and it has clear advantages in network complexity and training cost. On the AVLetters dataset, our model achieves a 5% improvement in accuracy over traditional methods and a 90% reduction in parameters compared to the state of the art.
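For readers unfamiliar with liquid state machines, the toy NumPy sketch below shows the core idea: a fixed, randomly wired reservoir of leaky integrate-and-fire neurons maps input spike trains to time-averaged firing rates, and only a simple linear readout is trained on top. Every constant and the synthetic two-class data are invented for illustration; the paper's STDP tuning and heuristic classifier search are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, T = 20, 100, 50                 # input channels, reservoir size, steps
W_in = rng.normal(0, 0.5, (N_RES, N_IN))     # fixed random input weights
W_res = rng.normal(0, 0.1, (N_RES, N_RES))   # fixed random recurrent weights

def liquid_state(spikes):                    # spikes: (T, N_IN) array of 0/1
    v = np.zeros(N_RES)                      # membrane potentials
    prev = np.zeros(N_RES)                   # spikes emitted at the last step
    counts = np.zeros(N_RES)
    for t in range(T):
        v = 0.9 * v + W_in @ spikes[t] + W_res @ prev  # leak + input + recurrence
        prev = (v > 1.0).astype(float)       # threshold crossing emits a spike
        v[prev > 0] = 0.0                    # reset neurons that fired
        counts += prev
    return counts / T                        # time-averaged rates = "liquid state"

# Toy two-class task: each class drives a different group of input channels.
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        s = (rng.random((T, N_IN)) < 0.05).astype(float)      # background spikes
        s[:, label * 10:(label + 1) * 10] += rng.random((T, 10)) < 0.3
        X.append(liquid_state(np.clip(s, 0, 1)))
        y.append(label)
X, y = np.array(X), np.array(y)

A = np.c_[X, np.ones(len(X))]                # linear readout with a bias term
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("toy readout accuracy:", ((A @ w > 0.5) == y).mean())
```

Only the readout weights are learned; the reservoir never changes, which is what keeps the trainable parameter count so low compared with deep networks.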
21

Brown, A. M., R. C. Dowell, and G. M. Clark. "Clinical Results for Postlingually Deaf Patients Implanted with Multichannel Cochlear Prostheses". Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (January 1987): 127–28. http://dx.doi.org/10.1177/00034894870960s168.

Abstract:
Clinical results for 24 patients using the Nucleus 22-channel cochlear prosthesis have shown the device to be successful in presenting amplitude, fundamental frequency, and second formant information to patients with acquired hearing loss. For all patients, this has meant a significant improvement in their communication ability when using lipreading and some ability to understand unknown speech without lipreading or contextual cues. Approximately 40% of patients are able to understand running speech in a limited fashion without lipreading, and this ability has been evaluated using the speech-tracking technique for a number of patients. Many patients are able to have limited conversations on the telephone without using a special code. Although the prosthesis has been designed with the presentation of speech signals in mind, recognition and discrimination of environmental sounds has also been very encouraging with patients scoring 70% to 80% correct for closed set environmental sound testing. Follow-up testing has indicated that the ability to understand open set speech without lipreading continues to improve up to at least 12 months postoperatively. Open set sentence test results improved from an average of 20% at 3 months to 40% at 12 months.
22

Strand, Julia, Allison Cooperman, Jonathon Rowe, and Andrea Simenstad. "Individual Differences in Susceptibility to the McGurk Effect: Links With Lipreading and Detecting Audiovisual Incongruity". Journal of Speech, Language, and Hearing Research 57, no. 6 (December 2014): 2322–31. http://dx.doi.org/10.1044/2014_jslhr-h-14-0059.

Abstract:
Purpose: Prior studies (e.g., Nath & Beauchamp, 2012) report large individual variability in the extent to which participants are susceptible to the McGurk effect, a prominent audiovisual (AV) speech illusion. The current study evaluated whether susceptibility to the McGurk effect (MGS) is related to lipreading skill and whether multiple measures of MGS that have been used previously are correlated. In addition, it evaluated the test–retest reliability of individual differences in MGS. Method: Seventy-three college-age participants completed 2 tasks measuring MGS and 3 measures of lipreading skill. Fifty-eight participants returned for a 2nd session (approximately 2 months later) in which MGS was tested again. Results: The current study demonstrated that MGS shows high test–retest reliability and is correlated with some measures of lipreading skill. In addition, susceptibility measures derived from identification tasks were moderately related to the ability to detect instances of AV incongruity. Conclusions: Although MGS is often cited as a demonstration of AV integration, the results suggest that perceiving the illusion depends in part on individual differences in lipreading skill and detecting AV incongruity. Therefore, individual differences in susceptibility to the illusion are not solely attributable to individual differences in AV integration ability.
23

Li, Hao, Nurbiya Yadikar, Yali Zhu, Mutallip Mamut, and Kurban Ubul. "Learning the Relative Dynamic Features for Word-Level Lipreading". Sensors 22, no. 10 (May 13, 2022): 3732. http://dx.doi.org/10.3390/s22103732.

Abstract:
Lipreading is a technique for analyzing sequences of lip movements and then recognizing the speech content of a speaker. Because the structure of our vocal organs limits the number of pronunciations we can make to a finite set, homophones cause problems when speaking. On the other hand, different speakers produce various lip movements for the same word. To address these problems, we focus on spatial–temporal feature extraction in word-level lipreading in this paper, and we propose an efficient two-stream model to learn the relative dynamic information of lip motion. In this model, two CNN streams with different channel capacities are used to extract static features in a single frame and dynamic information between multi-frame sequences, respectively. We explored a more effective convolution structure for each component in the front-end model and improved performance by about 8%. Then, according to the characteristics of the word-level lipreading dataset, we further studied the impact of two sampling methods on the fast and slow channels. Furthermore, we discussed the influence of the fusion methods of the front-end and back-end models under the two-stream network structure. Finally, we evaluated the proposed model on two large-scale lipreading datasets and achieved a new state of the art.
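The static/dynamic split can be pictured as a SlowFast-style pair of front ends: a wide-channel stream sees temporally subsampled frames for static appearance, a narrow-channel stream keeps the full frame rate for motion, and the pooled features are concatenated. The sketch below is an assumed illustration of that idea, not the authors' network; every channel width and sampling rate is invented.

```python
import torch
import torch.nn as nn

class TwoStreamFrontEnd(nn.Module):
    def __init__(self):
        super().__init__()
        # Static stream: wide channels, temporally sparse (every 4th frame).
        self.static = nn.Conv3d(1, 64, (1, 5, 5), stride=(1, 2, 2), padding=(0, 2, 2))
        # Dynamic stream: narrow channels, full frame rate, temporal kernel > 1.
        self.dynamic = nn.Conv3d(1, 8, (5, 5, 5), stride=(1, 2, 2), padding=(2, 2, 2))
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, clip):                   # clip: (batch, 1, T, H, W)
        slow = self.static(clip[:, :, ::4])    # subsampled frames -> static cues
        fast = self.dynamic(clip)              # all frames -> motion cues
        return torch.cat([self.pool(slow).flatten(1),
                          self.pool(fast).flatten(1)], dim=1)

feat = TwoStreamFrontEnd()(torch.randn(2, 1, 16, 96, 96))
print(feat.shape)                              # torch.Size([2, 72])
```

The fusion question the authors study is exactly where and how the two streams (here naively concatenated after pooling) should be combined with the back-end sequence model.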
24

Jishnu T S and Anju Antony. "LipNet: End-to-End Lipreading". Indian Journal of Data Mining 4, no. 1 (May 30, 2024): 1–4. http://dx.doi.org/10.54105/ijdm.a1632.04010524.

Abstract:
Lipreading is the task of decoding text from the movement of a speaker’s mouth. This research presents the development of an advanced end-to-end lipreading system. Leveraging deep learning architectures and multimodal fusion techniques, the proposed system interprets spoken language solely from visual cues, such as lip movements. Through meticulous data collection, annotation, preprocessing, model development, and evaluation, diverse datasets encompassing various speakers, accents, languages, and environmental conditions are curated to ensure robustness and generalization. Conventional methods divided the task into two phases: designing or learning visual features, and prediction. Most deep lipreading methods are trainable end to end. In the past, lipreading has been tackled with tedious and sometimes unsatisfactory techniques that break speech down into smaller units such as phonemes or visemes, but these methods often fail when faced with real-world problems such as contextual factors, accents, and differences in speech patterns. Nevertheless, earlier research on end-to-end trained models only carried out word classification; sentence-level sequence prediction was not included. LipNet is an end-to-end trained model that uses spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification (CTC) loss to translate a variable-length sequence of video frames to text. LipNet breaks from the traditional paradigm by using an all-encompassing, end-to-end approach supported by deep learning algorithms; convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are skilled at processing sequential data and extracting high-level representations, are fundamental to LipNet's architecture. LipNet achieves 95.2% sentence-level accuracy on the GRID corpus overlapped-speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy. These results underscore the transformative potential of the lipreading system in real-world applications, particularly in domains such as assistive technology and human-computer interaction, where it can significantly improve communication accessibility and inclusivity for individuals with hearing impairments.
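The spatiotemporal-convolution, recurrent-network, and CTC recipe can be compressed into a few lines of PyTorch. The sketch below is a toy stand-in under assumed dimensions, not the published LipNet implementation (which stacks three spatiotemporal convolution blocks and two bidirectional GRUs over 75-frame GRID clips):

```python
import torch
import torch.nn as nn

class MiniLipNet(nn.Module):
    def __init__(self, chars=28):              # letters + space + CTC blank
        super().__init__()
        self.stcnn = nn.Sequential(             # spatiotemporal convolution
            nn.Conv3d(3, 32, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(128, chars)

    def forward(self, video):                   # video: (batch, 3, T, H, W)
        z = self.stcnn(video)                   # (batch, 32, T, h, w)
        z = z.mean(dim=(3, 4)).transpose(1, 2)  # spatial pooling -> (batch, T, 32)
        z, _ = self.gru(z)                      # (batch, T, 128)
        return self.fc(z).log_softmax(-1)       # per-frame log-probs for CTC

log_probs = MiniLipNet()(torch.randn(2, 3, 24, 64, 128))
print(log_probs.shape)                          # torch.Size([2, 24, 28])
```

Training pairs these per-frame distributions with whole-sentence transcripts through the CTC loss, which is what removes the need for frame-level alignments.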
25

Vannuscorps, Gilles, Michael Andres, Sarah Pereira Carneiro, Elise Rombaux, and Alfonso Caramazza. "Typically Efficient Lipreading without Motor Simulation". Journal of Cognitive Neuroscience 33, no. 4 (April 2021): 611–21. http://dx.doi.org/10.1162/jocn_a_01666.

Abstract:
All it takes is a face-to-face conversation in a noisy environment to realize that viewing a speaker's lip movements contributes to speech comprehension. What are the processes underlying the perception and interpretation of visual speech? Brain areas that control speech production are also recruited during lipreading. This finding raises the possibility that lipreading may be supported, at least to some extent, by a covert unconscious imitation of the observed speech movements in the observer's own speech motor system—a motor simulation. However, whether, and if so to what extent, motor simulation contributes to visual speech interpretation remains unclear. In two experiments, we found that several participants with congenital facial paralysis were as good at lipreading as the control population and performed these tasks in a way that is qualitatively similar to the controls despite severely reduced or even completely absent lip motor representations. Although it remains an open question whether this conclusion generalizes to other experimental conditions and to typically developed participants, these findings considerably narrow the space of hypotheses for a role of motor simulation in lipreading. Beyond its theoretical significance in the field of speech perception, this finding also calls for a re-examination of the more general hypothesis that motor simulation underlies action perception and interpretation developed in the frameworks of motor simulation and mirror neuron hypotheses.
26

Rosen, Stuart, John Walliker, Judith A. Brimacombe, and Bradly J. Edgerton. "Prosodic and Segmental Aspects of Speech Perception with the House/3M Single-Channel Implant". Journal of Speech, Language, and Hearing Research 32, no. 1 (March 1989): 93–111. http://dx.doi.org/10.1044/jshr.3201.93.

Abstract:
Four adult users of the House/3M single-channel cochlear implant were tested for their ability to label question and statement intonation contours (by auditory means alone) and to identify a set of 12 intervocalic consonants (with and without lipreading). Nineteen of 20 scores obtained on the question/statement task were significantly better than chance. Simplifying the stimulating waveform so as to signal fundamental frequency alone sometimes led to an improvement in performance. In consonant identification, lipreading alone scores were always far inferior to those obtained by lipreading with the implant. Phonetic feature analyses showed that the major effect of using the implant was to increase the transmission of voicing information, although improvements in the appropriate labelling of manner distinctions were also found. Place of articulation was poorly identified from the auditory signal alone. These results are best explained by supposing that subjects can use the relatively gross temporal information found in the stimulating waveforms (periodicity, randomness and silence) in a linguistic fashion. Amplitude envelope cues are of significant, but secondary, importance. By providing information that is relatively invisible, the House/3M device can thus serve as an important aid to lipreading, even though it relies primarily on the temporal structure of the stimulating waveform. All implant systems, including multi-channel ones, might benefit from the appropriate exploitation of such temporal features.
27

Ching, Yuk Ching. "Lipreading Cantonese with voice pitch". Journal of the Acoustical Society of America 77, S1 (April 1985): S39–S40. http://dx.doi.org/10.1121/1.2022317.

28

Myslobodsky, Michael S., Terry Goldberg, Fern Johnson, Leslie Hicks, and Daniel R. Weinberger. "Lipreading in Patients with Schizophrenia". Journal of Nervous and Mental Disease 180, no. 3 (March 1992): 168–71. http://dx.doi.org/10.1097/00005053-199203000-00004.

29

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading: A classifier combination approach". Pattern Recognition Letters 18, no. 11-13 (November 1997): 1421–26. http://dx.doi.org/10.1016/s0167-8655(97)00113-x.

30

Zhao, Guoying, M. Barnard, and M. Pietikainen. "Lipreading With Local Spatiotemporal Descriptors". IEEE Transactions on Multimedia 11, no. 7 (November 2009): 1254–65. http://dx.doi.org/10.1109/tmm.2009.2030637.

31

Farrimond, Thomas. "Effect of Encouragement on Performance of Young and Old Subjects on a Task Involving Lipreading". Psychological Reports 65, no. 3_suppl2 (December 1989): 1247–50. http://dx.doi.org/10.2466/pr0.1989.65.3f.1247.

Abstract:
Two tests of lipreading ability were constructed, one using numbers and the other using sentences that included visual cues. The tests were given to two groups of men, an older group aged 40 yr. and over (n = 110) and a younger group under 40 yr. (n = 70). Requests to guess produced a higher mean score for the older subjects on the lipreading tests containing the greater amount of information. It is suggested that differences in the effect of encouragement on performance between young and old may be related to both age and cultural factors.
32

Salik, Khwaja Mohd, Swati Aggarwal, Yaman Kumar, Rajiv Ratn Shah, Rohit Jain, and Roger Zimmermann. "Lipper: Speaker Independent Speech Synthesis Using Multi-View Lipreading". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10023–24. http://dx.doi.org/10.1609/aaai.v33i01.330110023.

Abstract:
Lipreading is the process of understanding and interpreting speech by observing a speaker’s lip movements. In the past, most work in lipreading has been limited to classifying silent videos into a fixed number of text classes. However, this limits the applications of lipreading, since human language cannot be bound to a fixed set of words or languages. The aim of this work is to reconstruct intelligible acoustic speech signals from silent videos of various poses of a person that Lipper has never seen before. Lipper, therefore, is a vocabulary- and language-agnostic, speaker-independent, near real-time model that deals with a variety of poses of a speaker. The model leverages silent video feeds from multiple cameras recording a subject to generate intelligible speech of a speaker. It uses a deep learning based STCNN+BiGRU architecture to achieve this goal. We evaluate speech reconstruction for speaker-independent scenarios and demonstrate the speech output by overlaying the audios reconstructed by Lipper on the corresponding videos.
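Framing lipreading as regression rather than classification changes only the output side of such a network: instead of per-frame character logits and a CTC loss, the model emits one acoustic feature frame per video frame and is trained with a reconstruction loss. A hypothetical sketch follows; the 80 mel bins, the one-to-one frame alignment, and all widths are assumptions, with only the STCNN+BiGRU naming taken from the abstract.

```python
import torch
import torch.nn as nn

class VideoToSpeech(nn.Module):
    def __init__(self, mel_bins=80):
        super().__init__()
        self.stcnn = nn.Conv3d(3, 32, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))
        self.bigru = nn.GRU(32, 128, bidirectional=True, batch_first=True)
        self.to_mel = nn.Linear(256, mel_bins)  # one acoustic frame per video frame

    def forward(self, video):                   # video: (batch, 3, T, H, W)
        z = torch.relu(self.stcnn(video)).mean(dim=(3, 4)).transpose(1, 2)
        z, _ = self.bigru(z)                    # (batch, T, 256)
        return self.to_mel(z)                   # (batch, T, mel_bins)

model = VideoToSpeech()
pred = model(torch.randn(2, 3, 24, 64, 64))    # two silent 24-frame clips
loss = nn.MSELoss()(pred, torch.randn(2, 24, 80))  # regress toward target mels
```

A vocoder would then turn the predicted mel frames back into a waveform, which is why the output is not tied to any fixed vocabulary or language.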
33

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Sentence Lipreading Using Hidden Markov Model with Integrated Grammar". International Journal of Pattern Recognition and Artificial Intelligence 15, no. 01 (February 2001): 161–76. http://dx.doi.org/10.1142/s0218001401000770.

Abstract:
In this paper, we describe a systematic approach to the lipreading of whole sentences. A vocabulary of elementary words is considered. Based on the vocabulary, we define a grammar that generates a set of legal sentences. Our lipreading approach is based on a combination of the grammar with hidden Markov models (HMMs). Two different experiments were conducted. In the first experiment a set of e-mail commands is considered, while the set of sentences in the second experiment is given by all English integer numbers up to one million. Both experiments showed promising results given the difficulty of the considered task.
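The grammar-plus-HMM combination can be illustrated with a toy grammar-constrained Viterbi search in which only word transitions licensed by the grammar are explored. The three-word grammar and per-frame scores below are invented, and each word is collapsed to a single state, whereas the paper uses full per-word HMMs:

```python
# Toy grammar-constrained Viterbi decode. Each word is one state; real systems
# expand each word into its HMM states. Grammar and scores are invented.
grammar = {None: {"send", "stop"},     # legal sentence-initial words
           "send": {"mail"},           # "send" may be followed only by "mail"
           "mail": {"stop"},
           "stop": set()}
# Per-frame log-likelihoods of each word (normally emitted by the word HMMs).
frames = [{"send": -0.1, "mail": -2.0, "stop": -3.0},
          {"send": -2.5, "mail": -0.2, "stop": -1.5},
          {"send": -3.0, "mail": -2.2, "stop": -0.1}]

def viterbi(frames):
    best = {w: (frames[0][w], [w]) for w in grammar[None]}   # start rules
    for obs in frames[1:]:
        nxt = {}
        for w, (score, path) in best.items():
            for w2 in grammar[w] | {w}:        # stay in word or take a legal step
                cand = (score + obs[w2], path + ([w2] if w2 != w else []))
                if w2 not in nxt or cand[0] > nxt[w2][0]:
                    nxt[w2] = cand
        best = nxt
    return max(best.values())

score, sentence = viterbi(frames)
print(sentence, round(score, 2))               # ['send', 'mail', 'stop'] -0.4
```

Pruning the search to grammatical transitions is what lets a small vocabulary cover a large sentence set (such as all English integer numbers up to one million) without the decoder considering impossible word orders.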
34

Clark, G. M., B. C. Pyman, R. L. Webb, B. K.-H. G. Franz, T. J. Redhead, and R. K. Shepherd. "Surgery for the Safe Insertion and Reinsertion of the Banded Electrode Array". Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (January 1987): 10–12. http://dx.doi.org/10.1177/00034894870960s102.

Abstract:
Adhering to the surgical technique outlined in the protocol for the Nucleus implant has resulted in over 100 patients worldwide obtaining significant benefit from multichannel stimulation. A detailed analysis of the results in 40 patients shows that it improves their awareness of environmental sounds and their abilities in understanding running speech when combined with lipreading. In addition, one third to one half of the patients also understand significant amounts of running speech without lipreading and some can have interactive conversations over the telephone. It is clear that any insertion trauma is not significant, which is confirmed by the excellent clinical results.
35

Caron, Cora Jirschik, Coriandre Vilain, Jean-Luc Schwartz, Clémence Bayard, Axelle Calcus, Jacqueline Leybaert, and Cécile Colin. "The Effect of Cued-Speech (CS) Perception on Auditory Processing in Typically Hearing (TH) Individuals Who Are Either Naïve or Experienced CS Producers". Brain Sciences 13, no. 7 (July 7, 2023): 1036. http://dx.doi.org/10.3390/brainsci13071036.

Abstract:
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers. Adding CS gestures to lipread information increased the magnitude of effects observed at the N1 time window, but did not enhance P2 amplitude attenuation. Interestingly, presenting CS gestures without lipreading information yielded distinct response patterns depending on participants’ experience with the system. In the group of CS producers, AV perception of CS gestures facilitated the early stage of speech processing, while in the group of naïve participants, it elicited a latency delay at the P2 time window. These results suggest that, for experienced CS users, the perception of gestures facilitates early stages of speech processing, but when people are not familiar with the system, the perception of gestures impacts the efficiency of phonological decoding.
36

Updike, Claudia D., Roberta L. Albertson, Cathy M. German, and Joanne M. Ward. "Evaluation of the Craig Lipreading Inventory". Perceptual and Motor Skills 70, no. 3_suppl (June 1990): 1271–82. http://dx.doi.org/10.2466/pms.1990.70.3c.1271.

37

Demorest, Marilyn E., Lynne E. Bernstein, and Silvio P. Eberhardt. "Reliability of individual differences in lipreading". Journal of the Acoustical Society of America 82, S1 (November 1987): S24. http://dx.doi.org/10.1121/1.2024715.

38

Matthews, I., T. F. Cootes, J. A. Bangham, S. Cox, and R. Harvey. "Extraction of visual features for lipreading". IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 2 (2002): 198–213. http://dx.doi.org/10.1109/34.982900.

39

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using signal analysis over time". Signal Processing 77, no. 2 (September 1999): 195–208. http://dx.doi.org/10.1016/s0165-1684(99)00032-8.

40

Chen, Xuejuan, Jixiang Du, and Hongbo Zhang. "Lipreading with DenseNet and resBi-LSTM". Signal, Image and Video Processing 14, no. 5 (January 24, 2020): 981–89. http://dx.doi.org/10.1007/s11760-019-01630-1.

41

Mase, Kenji, and Alex Pentland. "Automatic lipreading by optical-flow analysis". Systems and Computers in Japan 22, no. 6 (1991): 67–76. http://dx.doi.org/10.1002/scj.4690220607.

42

Bernstein, Lynne E. "Response Errors in Females’ and Males’ Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy". Language Learning 68 (February 26, 2018): 127–58. http://dx.doi.org/10.1111/lang.12281.

43

Cohen, N. L., S. B. Waltzman, and W. Shapiro. "Multichannel Cochlear Implant: The New York University/Bellevue Experience". Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (January 1987): 139–40. http://dx.doi.org/10.1177/00034894870960s177.

Abstract:
A total of nine patients have been implanted at the New York University/Bellevue Medical Center with the Nucleus multichannel cochlear implant. The patients ranged in age from 21 to 62 years, with a mean age of 38.7 years. All were postlingually deafened with bilateral profound sensorineural hearing loss, and were unable to benefit from appropriate amplification. Each patient was implanted with the 22-electrode array inserted into the scala tympani, using the facial recess technique. Seven of the nine patients have functioning 22-channel systems, whereas one patient has a single-channel system and one had 14 electrodes inserted because of an unsuspected obstruction in the scala tympani. All patients are regular users of the device and none have been lost to follow-up. Seven patients have completed the prescribed Nucleus training program, and two patients are in the early stages of training. All nine patients have shown a restoration of hearing sensation in response to acoustic stimuli and a recognition of a wide variety of environmental sounds. All seven patients who have completed training and are using the multichannel stimulation have shown an improvement in their vowel and consonant recognition scores when the implant is used in conjunction with lipreading. Mean speech-tracking scores for these patients show an improvement from lipreading alone to lipreading with implant of 28.8 to 60.6 words per minute. Patients also demonstrated a consistent increased ability to use suprasegmental information and to obtain closed-set word recognition on portions of the Minimal Auditory Capabilities test battery. Several of the patients have shown an ability to understand significant amounts of open set speech without lipreading. Two patients can comprehend noncoded telephone conversation; one scores an average of 42% on open set speech discrimination testing and the other 20% using the W22 word list with audition only.
44

Bear, Helen L., and Richard Harvey. "Alternative Visual Units for an Optimized Phoneme-Based Lipreading System". Applied Sciences 9, no. 18 (September 15, 2019): 3870. http://dx.doi.org/10.3390/app9183870.

Abstract:
Lipreading is understanding speech from observed lip movements. An observed series of lip motions is an ordered sequence of visual lip gestures. These gestures are commonly known, but as yet are not formally defined, as 'visemes'. In this article, we describe a structured approach which allows us to create speaker-dependent visemes with a fixed number of visemes within each set. We create sets of visemes for sizes two to 45. Each set of visemes is based upon clustering phonemes, thus each set has a unique phoneme-to-viseme mapping. We first present an experiment using these maps and the Resource Management Audio-Visual (RMAV) dataset which shows the effect of changing the viseme map size in speaker-dependent machine lipreading and demonstrate that word recognition with phoneme classifiers is possible. Furthermore, we show that there are intermediate units between visemes and phonemes which are better still. Second, we present a novel two-pass training scheme for phoneme classifiers. This approach uses our new intermediary visual units from our first experiment in the first pass as classifiers; before using the phoneme-to-viseme maps, we retrain these into phoneme classifiers. This method significantly improves on previous lipreading results with RMAV speakers.
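A phoneme-to-viseme map of any chosen size can be built by clustering phonemes on visual similarity, which is the spirit of the speaker-dependent maps described above. In the sketch below the confusion matrix is random stand-in data (in practice it would come from phoneme confusions measured for one speaker), and sweeping k produces viseme sets of different sizes; the phoneme symbols are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
phonemes = ["p", "b", "m", "f", "v", "t", "d", "s", "z", "k"]
# Symmetric visual-confusion matrix; high values = phonemes look alike on the lips.
C = rng.random((10, 10))
C = (C + C.T) / 2
np.fill_diagonal(C, 1.0)

# Hierarchical clustering on distance = 1 - confusion (condensed upper triangle).
Z = linkage(1.0 - C[np.triu_indices(10, k=1)], method="average")
for k in (2, 4, 6):                  # sweep the number of visemes in the set
    labels = fcluster(Z, t=k, criterion="maxclust")
    viseme_map = {p: f"V{c}" for p, c in zip(phonemes, labels)}
    print(k, viseme_map)
```

Each distinct cluster label defines one viseme, and the printed dictionary is exactly the kind of phoneme-to-viseme map a lipreading system uses to pool visually indistinguishable phoneme classes.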
45

Collins, M. Jane, and Richard R. Hurtig. "Categorical Perception of Speech Sounds via the Tactile Mode". Journal of Speech, Language, and Hearing Research 28, no. 4 (December 1985): 594–98. http://dx.doi.org/10.1044/jshr.2804.594.

Abstract:
The usefulness of tactile devices as aids to lipreading has been established. However, maximum usefulness in reducing the ambiguity of lipreading cues and/or use of tactile devices as a substitute for audition may be dependent on phonemic recognition via tactile signals alone. In the present study, a categorical perception paradigm was used to evaluate tactile perception of speech sounds in comparison to auditory perception. The results show that speech signals delivered by tactile stimulation can be categorically perceived on a voice-onset time (VOT) continuum. The boundary for the voiced-voiceless distinction falls at longer VOTs for tactile than for auditory perception. It is concluded that the procedure is useful for determining characteristics of tactile perception and for prosthesis evaluation.
46

Hygge, Staffan, Jerker Rönnberg, Birgitta Larsby, and Stig Arlinger. "Normal-Hearing and Hearing-Impaired Subjects' Ability to Just Follow Conversation in Competing Speech, Reversed Speech, and Noise Backgrounds". Journal of Speech, Language, and Hearing Research 35, no. 1 (February 1992): 208–15. http://dx.doi.org/10.1044/jshr.3501.208.

Abstract:
The performance on a conversation-following task by 24 hearing-impaired persons was compared with that of 24 matched controls with normal hearing in the presence of three background noises: (a) speech-spectrum random noise, (b) a male voice, and (c) the male voice played in reverse. The subjects’ task was to readjust the sound level of a female voice (signal), every time the signal voice was attenuated, to the subjective level at which it was just possible to understand what was being said. To assess the benefit of lipreading, half of the material was presented audiovisually and half auditorily only. It was predicted that background speech would have a greater masking effect than reversed speech, which would in turn have a lesser masking effect than random noise. It was predicted that hearing-impaired subjects would perform more poorly than the normal-hearing controls in a background of speech. The influence of lipreading was expected to be constant across groups and conditions. The results showed that the hearing-impaired subjects were equally affected by the three background noises and that normal-hearing persons were less affected by the background speech than by noise. The performance of the normal-hearing persons was superior to that of the hearing-impaired subjects. The prediction about lipreading was confirmed. The results were explained in terms of the reduced temporal resolution of the hearing-impaired subjects.
47

Montgomery, Allen A., Brian E. Walden, and Robert A. Prosek. "Effects of Consonantal Context on Vowel Lipreading". Journal of Speech, Language, and Hearing Research 30, no. 1 (March 1987): 50–59. http://dx.doi.org/10.1044/jshr.3001.50.

Abstract:
The effects of consonantal context on vowel lipreading were assessed for 30 adults with mild-to-moderate sensorineural hearing loss who lipread videotape recordings of two female talkers. The stimuli were the vowels /i, ɪ, ʊ, u/ in symmetric CVC form with the consonants /p, b, f, v, t, d, ʃ, g/ and in the asymmetric consonantal contexts /h/-V-/g/, /w/-V-/g/, /r/-V-/g/. Analyses of the confusion matrices from each talker indicated that vowel intelligibility was significantly poorer in most contexts involving highly visible consonants, although the utterances of one talker were highly intelligible in the bilabial context. Among the visible contexts, the fricative and labiodental contexts in particular produced the lowest vowel intelligibility regardless of talker. Lax vowels were consistently more difficult to perceive than tense vowels. Implications for talker selection and refinement of the concept of viseme were drawn.
48

Hao, Mingfeng, Mutallip Mamut, Nurbiya Yadikar, Alimjan Aysa, and Kurban Ubul. "A Survey of Research on Lipreading Technology". IEEE Access 8 (2020): 204518–44. http://dx.doi.org/10.1109/access.2020.3036865.

49

Demorest, Marilyn E., Lynne E. Bernstein, Silvio P. Eberhardt, and Gale P. De Haven. "An analysis of errors in lipreading sentences". Journal of the Acoustical Society of America 89, no. 4B (April 1991): 1958. http://dx.doi.org/10.1121/1.2029672.

50

Vroomen, Jean, and Beatrice de Gelder. "Lipreading and the compensation for coarticulation mechanism". Language and Cognitive Processes 16, no. 5-6 (October 2001): 661–72. http://dx.doi.org/10.1080/01690960143000092.
