
Journal articles on the topic 'Speech prediction EEG'

Consult the top 50 journal articles for your research on the topic 'Speech prediction EEG.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Anderson, Andrew J., Chris Davis, and Edmund C. Lalor. "Deep-learning models reveal how context and listener attention shape electrophysiological correlates of speech-to-language transformation." PLOS Computational Biology 20, no. 11 (2024): e1012537. http://dx.doi.org/10.1371/journal.pcbi.1012537.

Abstract:
To transform continuous speech into words, the human brain must resolve variability across utterances in intonation, speech rate, volume, accents and so on. A promising approach to explaining this process has been to model electroencephalogram (EEG) recordings of brain responses to speech. Contemporary models typically invoke context invariant speech categories (e.g. phonemes) as an intermediary representational stage between sounds and words. However, such models may not capture the complete picture because they do not model the brain mechanism that categorizes sounds and consequently may ove
2

Maki, Hayato, Sakriani Sakti, Hiroki Tanaka, and Satoshi Nakamura. "Quality prediction of synthesized speech based on tensor structured EEG signals." PLOS ONE 13, no. 6 (2018): e0193521. http://dx.doi.org/10.1371/journal.pone.0193521.

3

Ishii, Chikara, Hiroki Watanabe, Yasushi Naruse, and Aya S. Ihara. "Prediction of mutual satisfaction in natural conversation using EEG and speech behavior." Proceedings of the Annual Convention of the Japanese Psychological Association 88 (2024): 1C-059-PG. https://doi.org/10.4992/pacjpa.88.0_1c-059-pg.

4

Wikman, Patrik, Viljami Salmela, Eetu Sjöblom, Miika Leminen, Matti Laine, and Kimmo Alho. "Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network." PLOS Biology 22, no. 3 (2024): e3002534. http://dx.doi.org/10.1371/journal.pbio.3002534.

Abstract:
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional m
5

Rogachev, A. O., and O. V. Sysoeva. "Neural tracking of natural speech listening in children: temporal response function (TRF) approach." Genes & Cells 18, no. 4 (2023): 640–44. http://dx.doi.org/10.17816/gc623394.

Abstract:
Speech development is crucial for a child’s mental growth. Moreover, speech development significantly impacts a child’s educational and professional achievements. It enables the child to interact with the external environment and develop self-awareness and behavioral skills. Thus, the study of the mechanisms of speech development disorders and the development of diagnostic and remediation strategies is essential. Numerous cognitive and neurophysiological investigations into speech and its associated disorders among children are presently being conducted. Electroencephalography (EEG) studies de
6

Gibson, Jerry. "Entropy Power, Autoregressive Models, and Mutual Information." Entropy 20, no. 10 (2018): 750. http://dx.doi.org/10.3390/e20100750.

Abstract:
Autoregressive processes play a major role in speech processing (linear prediction), seismic signal processing, biological signal processing, and many other applications. We consider the quantity defined by Shannon in 1948, the entropy rate power, and show that the log ratio of entropy powers equals the difference in the differential entropy of the two processes. Furthermore, we use the log ratio of entropy powers to analyze the change in mutual information as the model order is increased for autoregressive processes. We examine when we can substitute the minimum mean squared prediction error
7

Sohoglu, Ediz, and Matthew H. Davis. "Perceptual learning of degraded speech by minimizing prediction error." Proceedings of the National Academy of Sciences 113, no. 12 (2016): E1747–E1756. http://dx.doi.org/10.1073/pnas.1523266113.

Abstract:
Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent ma
8

da Silva Souto, Carlos F., Wiebke Pätzold, Marina Paul, Stefan Debener, and Karen Insa Wolf. "Pre-gelled Electrode Grid for Self-Applied EEG Sleep Monitoring at Home." Frontiers in Neuroscience 16 (June 5, 2022): 1–11. https://doi.org/10.3389/fnins.2022.883966.

9

Shen, Stanley, Jess R. Kerlin, Heather Bortfeld, and Antoine J. Shahin. "The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study." Brain Sciences 10, no. 11 (2020): 810. http://dx.doi.org/10.3390/brainsci10110810.

Abstract:
The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV-integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly-intelligible d
10

Teixeira, Felipe Lage, Miguel Rocha e Costa, José Pio Abreu, Manuel Cabral, Salviano Pinto Soares, and João Paulo Teixeira. "A Narrative Review of Speech and EEG Features for Schizophrenia Detection: Progress and Challenges." Bioengineering 10, no. 4 (2023): 493. http://dx.doi.org/10.3390/bioengineering10040493.

Abstract:
Schizophrenia is a mental illness that affects an estimated 21 million people worldwide. The literature establishes that electroencephalography (EEG) is a well-implemented means of studying and diagnosing mental disorders. However, it is known that speech and language provide unique and essential information about human thought. Semantic and emotional content, semantic coherence, syntactic structure, and complexity can thus be combined in a machine learning process to detect schizophrenia. Several studies show that early identification is crucial to prevent the onset of illness or mitigate pos
11

Sriraam, N. "EEG Based Thought Translator." International Journal of Biomedical and Clinical Engineering 2, no. 1 (2013): 50–62. http://dx.doi.org/10.4018/ijbce.2013010105.

Abstract:
A brain computer interface is a communication system that translates brain activities into commands for a computer. For physically disabled people, who cannot express their needs through verbal mode (such as thirst, appetite etc), a brain-computer interface (BCI) is the only feasible channel for communicating with others. This technology has the capability of providing substantial independence and hence, a greatly improved quality of life for the physically disabled persons. The BCI technique utilizes electrical brain potentials to directly communicate to devices such as a personal computer sy
12

Chandurkar, Swati S., Shailaja V. Pede, and Shailesh A. Chandurkar. "System for Prediction of Human Emotions and Depression level with Recommendation of Suitable Therapy." Asian Journal of Computer Science and Technology 6, no. 2 (2017): 5–12. http://dx.doi.org/10.51983/ajcst-2017.6.2.1787.

Abstract:
In today’s competitive world, an individual needs to act smartly and take rapid steps to make his place in the competition. The ratio of youngsters to elder people is comparatively high, and they contribute towards the development of society. This paper presents a methodology to extract emotion from text in real time and add the expression to the textual contents during speech synthesis by using a corpus, an emotion recognition module, etc. Along with emotion recognition from human textual data, the system will analyze various human body signals such as blo
13

Attaheri, Adam, Áine Ní Choisdealbha, Sinead Rocha, et al. "Infant low-frequency EEG cortical power, cortical tracking and phase-amplitude coupling predicts language a year later." PLOS ONE 19, no. 12 (2024): e0313274. https://doi.org/10.1371/journal.pone.0313274.

Abstract:
Cortical signals have been shown to track acoustic and linguistic properties of continuous speech. This phenomenon has been measured in both children and adults, reflecting speech understanding by adults as well as cognitive functions such as attention and prediction. Furthermore, atypical low-frequency cortical tracking of speech is found in children with phonological difficulties (developmental dyslexia). Accordingly, low-frequency cortical signals may play a critical role in language acquisition. A recent investigation with infants (Attaheri et al., 2022 [1]) probed cortical tracking mechanis
14

Weissbart, Hugo, Katerina D. Kandylaki, and Tobias Reichenbach. "Cortical Tracking of Surprisal during Continuous Speech Comprehension." Journal of Cognitive Neuroscience 32, no. 1 (2020): 155–66. http://dx.doi.org/10.1162/jocn_a_01467.

Abstract:
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictabilit
15

MacGregor, Lucy J., Jennifer M. Rodd, Rebecca A. Gilbert, Olaf Hauk, Ediz Sohoglu, and Matthew H. Davis. "The Neural Time Course of Semantic Ambiguity Resolution in Speech Comprehension." Journal of Cognitive Neuroscience 32, no. 3 (2020): 403–25. http://dx.doi.org/10.1162/jocn_a_01493.

Abstract:
Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400–
16

Cimtay, Yucel, and Erhan Ekmekcioglu. "Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition." Sensors 20, no. 7 (2020): 2034. http://dx.doi.org/10.3390/s20072034.

Abstract:
The electroencephalogram (EEG) has great attraction in emotion recognition studies due to its resistance to deceptive actions of humans. This is one of the most significant advantages of brain signals in comparison to visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people as well as for the same person at different time instances. This nonstationary nature of EEG limits the accuracy of it when subject independency is the priority. The aim of this study is to increa
17

Moinuddin, Kazi Ashraf, Felix Havugimana, Rakib Al-Fahad, Gavin M. Bidelman, and Mohammed Yeasin. "Unraveling Spatial-Spectral Dynamics of Speech Categorization Speed Using Convolutional Neural Networks." Brain Sciences 13, no. 1 (2022): 75. http://dx.doi.org/10.3390/brainsci13010075.

Abstract:
The process of categorizing sounds into distinct phonetic categories is known as categorical perception (CP). Response times (RTs) provide a measure of perceptual difficulty during labeling decisions (i.e., categorization). The RT is quasi-stochastic in nature due to individuality and variations in perceptual tasks. To identify the source of RT variation in CP, we have built models to decode the brain regions and frequency bands driving fast, medium and slow response decision speeds. In particular, we implemented a parameter optimized convolutional neural network (CNN) to classify listeners’ b
18

Shen, Deju, Yuqin Deng, Chunyan Lin, Jianshu Li, Xuehua Lin, and Chaoning Zou. "Clinical Characteristics and Gene Mutation Analysis of Poststroke Epilepsy." Contrast Media & Molecular Imaging 2022 (August 29, 2022): 1–10. http://dx.doi.org/10.1155/2022/4801037.

Abstract:
Epilepsy is one of the most common brain disorders worldwide. Poststroke epilepsy (PSE) affects functional retrieval after stroke and brings considerable social values. A stroke occurs when the blood circulation to the brain fails, causing speech difficulties, memory loss, and paralysis. An electroencephalogram (EEG) is a tool that may detect anomalies in brain electrical activity, including those induced by a stroke. Using EEG data to determine the electrical action in the brains of stroke patients is an effort to measure therapy. Hence in this paper, deep learning assisted gene mutation anal
19

Strauß, Antje, Sonja A. Kotz, and Jonas Obleser. "Narrowed Expectancies under Degraded Speech: Revisiting the N400." Journal of Cognitive Neuroscience 25, no. 8 (2013): 1383–95. http://dx.doi.org/10.1162/jocn_a_00389.

Abstract:
Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band, moderately degraded: eight-band, and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic co
20

Voitenkov, V. B., A. B. Palchick, N. A. Savelieva, and E. P. Bogdanova. "Bioelectric activity of the brain in 3-4 years old children in eyes-open resting state." Translational Medicine 8, no. 4 (2021): 47–56. http://dx.doi.org/10.18705/2311-4495-2021-8-4-47-56.

Abstract:
Background. Electroencephalography is the main technique for assessing the functional state of the brain. Indications for EEG are diagnosis of paroxysmal states, prediction of the outcome of a pathological state, evaluation of bioelectrical activity if brain death is suspected. Up to 90 % of the native EEG in calm wakefulness in healthy individuals is occupied by “alpha activity”. In children in active wakefulness, the EEG pattern depends to a great extent on their age.Objective. The aim of the work was to assess EEG parameters in children aged 3–4 years in eyes-open resting state. Design and
21

Ávila-Cascajares, Fátima, Clara Waleczek, Sophie Kerres, Boris Suchan, and Christiane Völter. "Cross-Modal Plasticity in Postlingual Hearing Loss Predicts Speech Perception Outcomes After Cochlear Implantation." Journal of Clinical Medicine 13, no. 23 (2024): 7016. http://dx.doi.org/10.3390/jcm13237016.

Abstract:
Background: Sensory loss may lead to intra- and cross-modal cortical reorganization. Previous research showed a significant correlation between the cross-modal contribution of the right auditory cortex to visual evoked potentials (VEP) and speech perception in cochlear implant (CI) users with prelingual hearing loss (HL), but not in those with postlingual HL. The present study aimed to explore the cortical reorganization induced by postlingual HL, particularly in the right temporal region, and how it correlates with speech perception outcome with a CI. Material and Methods: A total of 53 adult
22

Goller, Lisa, Michael Schwartze, Ana Pinheiro, and Sonja Kotz. "M52. VOICES IN THE HEAD: AUDITORY VERBAL HALLUCINATIONS (AVH) IN HEALTHY INDIVIDUALS." Schizophrenia Bulletin 46, Supplement_1 (2020): S153–S154. http://dx.doi.org/10.1093/schbul/sbaa030.364.

Abstract:
Background. Auditory verbal hallucinations (AVH) are conscious sensory experiences occurring in the absence of external stimulation. AVH are experienced by 75% of individuals diagnosed with schizophrenia and can manifest in other neuropsychiatric disorders. However, AVH are also reported amongst healthy individuals. This implies that hearing voices is not necessarily linked to psychopathology. Amongst voice hearers, the likelihood of AVH seems to reflect individual differences in hallucination proneness (HP). The HP construct allows placing individuals on a psychosis continuum ranging
23

Shidlovskaya, Tetiana, Tamara Shidlovskaya, Nikolay Kozak, and Lyubov Petruk. "State of bioelectric activity of the brain in persons who received acoustic trauma in area of combat actions with a different stage of disorders in the auditory system." OTORHINOLARYNGOLOGY, no. 1(1) 2018 (March 27, 2018): 17–25. http://dx.doi.org/10.37219/2528-8253-2018-1-17.

Abstract:
Topicality: Providing medical care to patients with combat acoustic trauma remains a topical issue of military medicine. There are works in the literature that show changes in the central nervous system under the influence of intense noise and at acoustic trauma, however, only in individual studies this objective assessment of the functional state of the central nervous system in patients with sensorineural hearing loss is shown as well as the promising use of them. Aim: is to determine the most significant indicators of bioelectric activity of the brain according to the EEG in terms of predic
24

Schädler, Marc René. "Interactive spatial speech recognition maps based on simulated speech recognition experiments." Acta Acustica 6 (2022): 31. http://dx.doi.org/10.1051/aacus/2022028.

Abstract:
In their everyday life, the speech recognition performance of human listeners is influenced by diverse factors, such as the acoustic environment, the talker and listener positions, possibly impaired hearing, and optional hearing devices. Prediction models come closer to considering all required factors simultaneously to predict the individual speech recognition performance in complex, that is, e.g. multi-source dynamic, acoustic environments. While such predictions may still not be sufficiently accurate for serious applications, such as, e.g. individual hearing aid fitting, they can already be
25

Accou, Bernd, Mohammad Jalilpour Monesi, Hugo Van hamme, and Tom Francart. "Predicting speech intelligibility from EEG in a non-linear classification paradigm." Journal of Neural Engineering 18, no. 6 (2021): 066008. http://dx.doi.org/10.1088/1741-2552/ac33e9.

Abstract:
Objective. Currently, only behavioral speech understanding tests are available, which require active participation of the person being tested. As this is infeasible for certain populations, an objective measure of speech intelligibility is required. Recently, brain imaging data has been used to establish a relationship between stimulus and brain response. Linear models have been successfully linked to speech intelligibility but require per-subject training. We present a deep-learning-based model incorporating dilated convolutions that operates in a match/mismatch paradigm. The accurac
26

Nogueira, Waldo, and Hanna Dolhopiatenko. "Predicting speech intelligibility from a selective attention decoding paradigm in cochlear implant users." Journal of Neural Engineering 19, no. 2 (2022): 026037. http://dx.doi.org/10.1088/1741-2552/ac599f.

Abstract:
Objectives. Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates if selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users. Approach. CI users were instructed to attend to one out of two speech streams while EEG was recorded. Both speech streams were presented to the same ear and at different signal to interference ratios (SIRs). Speech envelope reconstruction of the to-be-attended speech from EEG was obtained by training decoder
27

Moon, Ki Woong. "Preceding word information for predicting speech errors in English as foreign language speech." Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A270. http://dx.doi.org/10.1121/10.0027464.

Abstract:
Speech errors, including disfluency errors (e.g., filled pauses (“uh”, “um”), repetition (“I me-mean right now.”)), and mispronunciation of speech segments (e.g., “think” as /sɪŋk/) are natural occurrences in speech production and they can affect speech fluency and proficiency. Detecting these errors is important, especially in assessing second language (L2) learners. Non-native speakers often produce speech errors, even in read speech, due to increased cognitive load when simultaneously producing the current word and processing the upcoming word. By analyzing two L2 speech corpora having diff
28

Summers, Van, Ken W. Grant, Brian E. Walden, Mary T. Cord, Rauna K. Surr, and Mounya Elhilali. "Evaluation of A “Direct-Comparison” Approach to Automatic Switching In Omnidirectional/Directional Hearing Aids." Journal of the American Academy of Audiology 19, no. 09 (2008): 708–20. http://dx.doi.org/10.3766/jaaa.19.9.6.

Abstract:
Background: Hearing aids today often provide both directional (DIR) and omnidirectional (OMNI) processing options with the currently active mode selected automatically by the device. The most common approach to automatic switching involves “acoustic scene analysis” where estimates of various acoustic properties of the listening environment (e.g., signal-to-noise ratio [SNR], overall sound level) are used as a basis for switching decisions. Purpose: The current study was carried out to evaluate an alternative, “direct-comparison” approach to automatic switching that does not involve assumptions
29

Taillez, Tobias de, Florian Denk, Bojana Mirkovic, Birger Kollmeier, and Bernd T. Meyer. "Modeling Nonlinear Transfer Functions from Speech Envelopes to Encephalography with Neural Networks." International Journal of Psychological Studies 11, no. 4 (2019): 1. http://dx.doi.org/10.5539/ijps.v11n4p1.

Abstract:
Different linear models have been proposed to establish a link between an auditory stimulus and the neurophysiological response obtained through electroencephalography (EEG). We investigate if non-linear mappings can be modeled with deep neural networks trained on continuous speech envelopes and EEG data obtained in an auditory attention two-speaker scenario. An artificial neural network was trained to predict the EEG response related to the attended and unattended speech envelopes. After training, the properties of the DNN-based model are analyzed by measuring the transfer function between inp
30

Li, Sarah R., Alex Knapp, Jing Tang, Suzanne Boyce, and T. Douglas Mast. "Predicting vocal tract shape information from tongue contours and audio using neural networks." Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A338. http://dx.doi.org/10.1121/10.0027740.

Abstract:
Midsagittal ultrasound imaging of the tongue is a portable and inexpensive way to provide articulatory information. However, although ultrasound images show a portion of the tongue surface, other vocal tract structures (e.g., palate) are not typically visible. This missing information may be useful for speech therapy and other applications, e.g., by characterizing vocal tract constrictions and informing how morphological variations affect speech patterns. Prediction of the vocal tract shape from information available during ultrasound imaging (e.g., tongue contours and audio recordings) is, th
31

BinKhamis, Ghada, Antonio Elia Forte, Tobias Reichenbach, Martin O’Driscoll, and Karolina Kluk. "Speech Auditory Brainstem Responses in Adult Hearing Aid Users: Effects of Aiding and Background Noise, and Prediction of Behavioral Measures." Trends in Hearing 23 (January 2019): 233121651984829. http://dx.doi.org/10.1177/2331216519848297.

Abstract:
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g., hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-auditory brainstem responses [speech-ABRs]) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential application of speech-ABRs as an objecti
32

de Prada Pérez, Ana. "Theoretical implications of research on bilingual subject production: The Vulnerability Hypothesis." International Journal of Bilingualism 23, no. 2 (2018): 670–94. http://dx.doi.org/10.1177/1367006918763141.

Abstract:
In this paper we propose a new hypothesis for the formal analysis of cross-linguistic influence, the Vulnerability Hypothesis (VH), with the support of data from subject personal pronoun use in Spanish and Catalan in Minorca, and contrast it to the Interface Hypothesis (IH). The VH establishes a categorical–variable continuum of permeability, that is, structures that show variable distributions are permeable while those that exhibit categorical distributions are not. To test the predictions of the VH, Spanish language samples were collected from 12 monolingual Spanish speakers, 11 Spanish-domi
33

Radhakrishnan, Simon M., Amanda M. O'Brien, Thomas Quatieri, and Kristina T. Johnson. "An exploratory investigation of acoustic features underlying arousal and valence perception of vocalizations from non-speaking individuals." Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A305. http://dx.doi.org/10.1121/10.0027595.

Abstract:
Emotion perception of vocalizations, especially from individuals with no or few spoken words, remains an underexplored topic in acoustical research. The aim of this exploratory study was to identify acoustic features within non-speech vocalizations correlated with perceived arousal and valence. 364 vocalizations were selected from the open-access ReCANVo dataset, comprising non-speech communicative sounds by non-speaking individuals with autism and neurodevelopmental disorders. 108 listeners independently rated each vocalization for arousal and valence on a 5-point Likert scale (78624 total ra
34

Holmes, Emma. "How does voice familiarity affect speech intelligibility?" Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A263. http://dx.doi.org/10.1121/10.0027437.

Abstract:
People often face the challenge of understanding speech when other sounds are present ("speech-in-noise perception")—which involves a variety of cognitive processes, such as attention and prior knowledge. We have consistently found that familiarity with a person’s voice improves the ability to understand speech-in-noise, using both naturally familiar (e.g., friends and partners) and lab-trained voices. In this talk, I will describe experiments in which we manipulated voice acoustics (such as fundamental frequency and formant spacing). For example, we have measured the smallest deviations in ac
35

Lu, Yuanxun, Jinxiang Chai, and Xun Cao. "Live speech portraits." ACM Transactions on Graphics 40, no. 6 (2021): 1–17. http://dx.doi.org/10.1145/3478513.3480484.

Abstract:
To the best of our knowledge, we first present a live system that generates personalized photorealistic talking-head animation only driven by audio signals at over 30 fps. Our system contains three stages. The first stage is a deep neural network that extracts deep audio features along with a manifold projection to project the features to the target person's speech space. In the second stage, we learn facial dynamics and motions from the projected audio features. The predicted motions include head poses and upper body motions, where the former is generated by an autoregressive probabilistic mo
36

Chaudhari, Manisha. "Multimodal Approach in Prediction of Alzheimer’s Disease Using Voice, Transcript Dataset." Journal of Information Systems Engineering and Management 10, no. 44s (2025): 122–28. https://doi.org/10.52783/jisem.v10i44s.8575.

Abstract:
Introduction: Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline, memory impairment, and impaired language abilities. Early and accurate prediction of AD is critical for effective intervention and management. This study proposes a multimodal approach that integrates heterogeneous data sources—including voice recordings, transcribed speech, textual metadata, and neuroimaging—to enhance prediction accuracy. Objectives: The primary objective of this study is to develop and evaluate a multimodal machine learning framework that combines acoustic f
37

Peters, Ryan E., Theres Grüter, and Arielle Borovsky. "Vocabulary size and native speaker self-identification influence flexibility in linguistic prediction among adult bilinguals." Applied Psycholinguistics 39, no. 6 (2018): 1439–69. http://dx.doi.org/10.1017/s0142716418000383.

Abstract:
When language users predict upcoming speech, they generate pluralistic expectations, weighted by likelihood (Kuperberg & Jaeger, 2016). Many variables influence the prediction of highly likely sentential outcomes, but less is known regarding variables affecting the prediction of less-likely outcomes. Here we explore how English vocabulary size and self-identification as a native speaker (NS) of English modulate adult bi-/multilinguals’ preactivation of less-likely sentential outcomes in two visual-world experiments. Participants heard transitive sentences containing an agent, actio
38

Kacur, Juraj, Boris Puterka, Jarmila Pavlovicova, and Milos Oravec. "On the Speech Properties and Feature Extraction Methods in Speech Emotion Recognition." Sensors 21, no. 5 (2021): 1888. http://dx.doi.org/10.3390/s21051888.

Abstract:
Many speech emotion recognition systems have been designed using different features and classification methods. Still, there is a lack of knowledge and reasoning regarding the underlying speech characteristics and processing, i.e., how basic characteristics, methods, and settings affect the accuracy, to what extent, etc. This study is to extend physical perspective on speech emotion recognition by analyzing basic speech characteristics and modeling methods, e.g., time characteristics (segmentation, window types, and classification regions—lengths and overlaps), frequency ranges, frequency scal
39

Du, Yi, and Robert J. Zatorre. "Musical training sharpens and bonds ears and tongue to hear speech better." Proceedings of the National Academy of Sciences 114, no. 51 (2017): 13579–84. http://dx.doi.org/10.1073/pnas.1712223114.

Abstract:
The idea that musical training improves speech perception in challenging listening environments is appealing and of clinical importance, yet the mechanisms of any such musician advantage are not well specified. Here, using functional magnetic resonance imaging (fMRI), we found that musicians outperformed nonmusicians in identifying syllables at varying signal-to-noise ratios (SNRs), which was associated with stronger activation of the left inferior frontal and right auditory regions in musicians compared with nonmusicians. Moreover, musicians showed greater specificity of phoneme representatio
40

Partheeban, Pachaivannan, Krishnamurthy Karthik, Partheeban Navin Elamparithi, Krishnan Somasundaram, and Baskaran Anuradha. "Urban road traffic noise on human exposure assessment using geospatial technology." Environmental Engineering Research 27, no. 5 (2021): 210249. http://dx.doi.org/10.4491/eer.2021.249.

Abstract:
The sounds produced by humans, industries, transport and animals in the atmosphere that pose a threat to the health of humans or animals can be characterized as noise pollution. Adverse effects due to noise exposure can involve speech communication interference and declining learning skills of children. Highway traffic noise contributes to 80% of all noise. It has grown to a massive scale because of growth in population along the roads leading to a rapid change in land use and has evolved into a common reality in various Indian cities. The main objective of this work is to develop a road traff
41

Paulraj, M. P., Kamalraj Subramaniam, Sazali Bin Yaccob, Abdul H. Bin Adom, and C. R. Hema. "Auditory Evoked Potential Response and Hearing Loss: A Review." Open Biomedical Engineering Journal 9, no. 1 (2015): 17–24. http://dx.doi.org/10.2174/1874120701509010017.

Abstract:
Hypoacusis is the most prevalent sensory disability in the world and, consequently, it can impede speech in human beings. One of the best approaches to tackle this issue is to conduct early and effective hearing screening tests using the electroencephalogram (EEG). EEG-based hearing threshold level determination is most suitable for persons who lack verbal communication and behavioral response to sound stimulation. An auditory evoked potential (AEP) is a type of EEG signal emanated from the brain scalp by an acoustical stimulus. The goal of this review is to assess the current state of knowledge in esti
42

Cristia, Alejandrina, and Amanda Seidl. "The hyperarticulation hypothesis of infant-directed speech." Journal of Child Language 41, no. 4 (2013): 913–34. http://dx.doi.org/10.1017/s0305000912000669.

Abstract:
Typically, the point vowels [i,ɑ,u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same underlying sound category should not. To test this prediction, vowels that are phonemically contrastive ([i–ɪ] and [eɪ–ε]), vowels that map onto the same underlying category ([æ–] and [ε–]), and the point vowels [i,ɑ,u] were elicited in IDS and ADS by Am
43

Cho, Sylvia. "Perception of speaker identity for bilingual voices." Journal of the Acoustical Society of America 155, no. 3_Supplement (2024): A274. http://dx.doi.org/10.1121/10.0027483.

Abstract:
Voice is often described as an “auditory face”; it provides important information concerning speaker identity (e.g., age, height, sex). The acoustic properties related to voice can also vary substantially within a speaker based on one’s emotional, social, and linguistic states. Recent work suggests that biological components have the greatest impact in the acoustic variability found in voice, followed by language-specific factors and speaking style [Lee & Kreiman, J. Acoust. Soc. Am. 153, A295 (2023)]. The effects of such within- vs. between-speaker acoustic variability on the perception o
44

Kaur, Gurpreet, Mohit Srivastava, and Amod Kumar. "Genetic Algorithm for Combined Speaker and Speech Recognition using Deep Neural Networks." Journal of Telecommunications and Information Technology 2 (June 29, 2018): 23–31. http://dx.doi.org/10.26636/jtit.2018.119617.

Abstract:
Huge growth is observed in the speech and speaker recognition field due to many artificial intelligence algorithms being applied. Speech is used to convey messages via the language being spoken, emotions, gender and speaker identity. Many real applications in healthcare are based upon speech and speaker recognition, e.g. a voice-controlled wheelchair helps control the chair. In this paper, we use a genetic algorithm (GA) for combined speaker and speech recognition, relying on optimized Mel Frequency Cepstral Coefficient (MFCC) speech features, and classification is performed using a Deep Neural Net
45

Singer, Cara M., Sango Otieno, Soo-Eun Chang, and Robin M. Jones. "Predicting Persistent Developmental Stuttering Using a Cumulative Risk Approach." Journal of Speech, Language, and Hearing Research 65, no. 1 (2022): 70–95. http://dx.doi.org/10.1044/2021_jslhr-21-00162.

Abstract:
Purpose: The purpose of this study was to explore how well a cumulative risk approach, based on empirically supported predictive factors, predicts whether a young child who stutters is likely to develop persistent developmental stuttering. In a cumulative risk approach, the number of predictive factors indicating a child is at risk to develop persistent stuttering is evaluated, and a greater number of indicators of risk are hypothesized to confer greater risk of persistent stuttering. Method: We combined extant data on 3- to 5-year-old children who stutter from two longitudinal studies to iden
46

Whitlock, James A. T., and George Dodd. "Speech Intelligibility in Classrooms: Specific Acoustical Needs for Primary School Children." Building Acoustics 15, no. 1 (2008): 35–47. http://dx.doi.org/10.1260/135101008784050223.

Abstract:
Classrooms for primary school children should be built to criteria based on children's speech intelligibility needs which in some respects – e.g. reverberation time – differ markedly from the traditional criteria for adults. To further identify why the needs of children and adults for speech perception are so different we have measured the ‘integration time’ of speech for adults and children using a novel technique to obviate the complicating effects of differing language. The results for children are significantly different from those for adults (35 ms cf. 50 ms) and recommendations for classroom
47

Fratoni, Giulia, Domenico De Salvio, and Dario D'Orazio. "Virtual assessment of phone booths' acoustic performance in laboratory and office environments." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 270, no. 6 (2024): 5668–75. http://dx.doi.org/10.3397/in_2024_3631.

Abstract:
The irrelevant speech noise is one of the significant issues affecting workers' productivity and comfort. Acoustic furniture is the most common solution addressing this problem: mobile phone booths and partially closed workstations are two examples. The acoustic performance of such devices is defined with laboratory tests according to ISO 23351-1:2020. The sound power level measured in reverberation rooms with and without the product determines the reduction of speech A-weighted sound power level (DS,A). However, predicting acoustic performance in real-world scenarios is still challenging. The
48

Muncke, Jan, Ivine Kuruvila, and Ulrich Hoppe. "Prediction of Speech Intelligibility by Means of EEG Responses to Sentences in Noise." Frontiers in Neuroscience 16 (June 1, 2022). http://dx.doi.org/10.3389/fnins.2022.876421.

Abstract:
Objective. Understanding speech in noisy conditions is challenging even for people with mild hearing loss, and intelligibility for an individual person is usually evaluated by using several subjective test methods. In the last few years, a method has been developed to determine a temporal response function (TRF) between speech envelope and simultaneous electroencephalographic (EEG) measurements. By using this TRF it is possible to predict the EEG signal for any speech signal. Recent studies have suggested that the accuracy of this prediction varies with the level of noise added to the speech sig
49

Ihara, Aya S., Atsushi Matsumoto, Shiro Ojima, et al. "Prediction of Second Language Proficiency Based on Electroencephalographic Signals Measured While Listening to Natural Speech." Frontiers in Human Neuroscience 15 (July 16, 2021). http://dx.doi.org/10.3389/fnhum.2021.665809.

Abstract:
This study had two goals: to clarify the relationship between electroencephalographic (EEG) features estimated while non-native speakers listened to a second language (L2) and their proficiency in L2 determined by a conventional paper test and to provide a predictive model for L2 proficiency based on EEG features. We measured EEG signals from 205 native Japanese speakers, who varied widely in English proficiency while they listened to natural speech in English. Following the EEG measurement, they completed a conventional English listening test for Japanese speakers. We estimated multivariate t
50

ter Bekke, Marlijn, Linda Drijvers, and Judith Holler. "Co-Speech Hand Gestures Are Used to Predict Upcoming Meaning." Psychological Science, April 22, 2025. https://doi.org/10.1177/09567976251331041.

Abstract:
In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders’ language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (