Academic literature on the topic 'Auditory attention decoding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Auditory attention decoding.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Auditory attention decoding":

1

Han, Cong, James O’Sullivan, Yi Luo, Jose Herrero, Ashesh D. Mehta, and Nima Mesgarani. "Speaker-independent auditory attention decoding without access to clean speech sources." Science Advances 5, no. 5 (May 2019): eaav6134. http://dx.doi.org/10.1126/sciadv.aav6134.

Abstract:
Speech perception in crowded environments is challenging for hearing-impaired listeners. Assistive hearing devices cannot lower interfering speakers without knowing which speaker the listener is focusing on. One possible solution is auditory attention decoding in which the brainwaves of listeners are compared with sound sources to determine the attended source, which can then be amplified to facilitate hearing. In realistic situations, however, only mixed audio is available. We utilize a novel speech separation algorithm to automatically separate speakers in mixed audio, with no need for the speakers to have prior training. Our results show that auditory attention decoding with automatically separated speakers is as accurate and fast as using clean speech sounds. The proposed method significantly improves the subjective and objective quality of the attended speaker. Our study addresses a major obstacle in actualization of auditory attention decoding that can assist hearing-impaired listeners and reduce listening effort for normal-hearing subjects.
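The stimulus-reconstruction pipeline underlying most of the work listed here can be summarized in a few lines: a linear decoder maps time-lagged EEG to an estimate of the attended speech envelope, and the speaker whose (separated) envelope correlates best with that estimate is decoded as attended. The sketch below is illustrative only, not Han et al.'s implementation; the lag count, ridge regularization, and synthetic data are assumptions.

```python
# Minimal sketch of correlation-based auditory attention decoding (AAD) with a
# linear stimulus-reconstruction decoder. Illustrative assumptions throughout.
import numpy as np

def build_lagged(eeg, n_lags):
    """Stack time-lagged copies of the EEG (shape: samples x channels)."""
    n, c = eeg.shape
    lagged = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return lagged

def train_decoder(eeg, envelope, n_lags=8, reg=1e3):
    """Ridge-regularized least squares from lagged EEG to the attended envelope."""
    X = build_lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, envelopes, decoder, n_lags=8):
    """Reconstruct the envelope from EEG; the speaker whose (separated)
    envelope correlates best with the reconstruction is decoded as attended."""
    recon = build_lagged(eeg, n_lags) @ decoder
    corrs = [np.corrcoef(recon, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs)), corrs

# Synthetic demo: 32-channel "EEG" weakly driven by speaker A's envelope.
rng = np.random.default_rng(0)
env_a, env_b = rng.standard_normal(8000), rng.standard_normal(8000)
eeg = 0.1 * env_a[:, None] + rng.standard_normal((8000, 32))
decoder = train_decoder(eeg, env_a)  # trained on the attended envelope
attended, corrs = decode_attention(eeg, [env_a, env_b], decoder)
print(f"decoded speaker index: {attended}, correlations: {corrs}")
```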
2

Aldag, Nina, Andreas Büchner, Thomas Lenarz, and Waldo Nogueira. "Towards decoding selective attention through cochlear implant electrodes as sensors in subjects with contralateral acoustic hearing." Journal of Neural Engineering 19, no. 1 (February 1, 2022): 016023. http://dx.doi.org/10.1088/1741-2552/ac4de6.

Abstract:
Objectives. Focusing attention on one speaker in a situation with multiple background speakers or noise is referred to as auditory selective attention. Decoding selective attention is an interesting line of research with respect to future brain-guided hearing aids or cochlear implants (CIs) that are designed to adaptively adjust sound processing through cortical feedback loops. This study investigates the feasibility of using the electrodes and backward telemetry of a CI to record electroencephalography (EEG). Approach. The study population included six normal-hearing (NH) listeners and five CI users with contralateral acoustic hearing. Cortical auditory evoked potentials (CAEP) and selective attention were recorded using a state-of-the-art high-density scalp EEG and, in the case of CI users, also using two CI electrodes as sensors in combination with the backward telemetry system of these devices, denoted as implant-based EEG (iEEG). Main results. In the selective attention paradigm with multi-channel scalp EEG the mean decoding accuracy across subjects was 94.8% and 94.6% for NH listeners and CI users, respectively. With single-channel scalp EEG the accuracy dropped but was above chance level in 8–9 out of 11 subjects, depending on the electrode montage. With the single-channel iEEG, the selective attention decoding accuracy could only be analyzed in two out of five CI users due to a loss of data in the other three subjects. In these two CI users, the selective attention decoding accuracy was above chance level. Significance. This study shows that single-channel EEG is suitable for auditory selective attention decoding, even though it reduces the decoding quality compared to a multi-channel approach. CI-based iEEG can be used for the purpose of recording CAEPs and decoding selective attention. However, the study also points out the need for further technical development for the CI backward telemetry regarding long-term recordings and the optimal sensor positions.
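As context for the "above chance level" statements in this and other abstracts: AAD studies commonly judge decoding accuracy against the upper bound of a binomial confidence interval for random guessing over the available decision windows. The sketch below shows that convention; it is a common practice in the field, not necessarily the exact test used in this study.

```python
# Hedged sketch: significance threshold for AAD accuracy under a binomial
# model of random guessing (p = 0.5 for two competing speakers).
from scipy.stats import binom

def chance_upper_bound(n_windows, p_guess=0.5, alpha=0.05):
    """Smallest accuracy that a random guesser (p_guess per window)
    exceeds with probability less than alpha."""
    k = binom.ppf(1 - alpha, n_windows, p_guess)  # critical count of correct windows
    return (k + 1) / n_windows

# E.g., with 40 decision windows and two speakers, accuracies above ~0.65
# would be called significantly above chance at alpha = 0.05.
print(chance_upper_bound(40))
```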
3

Geirnaert, Simon, Servaas Vandecappelle, Emina Alickovic, Alain de Cheveigné, Edmund Lalor, Bernd T. Meyer, Sina Miran, Tom Francart, and Alexander Bertrand. "Electroencephalography-Based Auditory Attention Decoding: Toward Neurosteered Hearing Devices." IEEE Signal Processing Magazine 38, no. 4 (July 2021): 89–102. http://dx.doi.org/10.1109/msp.2021.3075932.

4

Fu, Zhen, Xihong Wu, and Jing Chen. "Congruent audiovisual speech enhances auditory attention decoding with EEG." Journal of Neural Engineering 16, no. 6 (November 6, 2019): 066033. http://dx.doi.org/10.1088/1741-2552/ab4340.

5

Straetmans, L., B. Holtze, S. Debener, M. Jaeger, and B. Mirkovic. "Neural tracking to go: auditory attention decoding and saliency detection with mobile EEG." Journal of Neural Engineering 18, no. 6 (December 1, 2021): 066054. http://dx.doi.org/10.1088/1741-2552/ac42b5.

Abstract:
Objective. Neuro-steered assistive technologies have been suggested to offer a major advancement in future devices like neuro-steered hearing aids. Auditory attention decoding (AAD) methods would in that case allow for identification of an attended speaker within complex auditory environments, exclusively from neural data. Decoding the attended speaker using neural information has so far only been done in controlled laboratory settings. Yet, it is known that ever-present factors like distraction and movement are reflected in the neural signal parameters related to attention. Approach. Thus, in the current study we applied a two-competing speaker paradigm to investigate performance of a commonly applied electroencephalography-based AAD model outside of the laboratory during leisure walking and distraction. Unique environmental sounds were added to the auditory scene and served as distractor events. Main results. The current study shows, for the first time, that the attended speaker can be accurately decoded during natural movement. At a temporal resolution of as short as 5 s and without artifact attenuation, decoding was found to be significantly above chance level. Further, as hypothesized, we found a decrease in attention to the to-be-attended and the to-be-ignored speech stream after the occurrence of a salient event. Additionally, we demonstrate that it is possible to predict neural correlates of distraction with a computational model of auditory saliency based on acoustic features. Significance. Taken together, our study shows that auditory attention tracking outside of the laboratory in ecologically valid conditions is feasible and a step towards the development of future neural-steered hearing aids.
6

Facoetti, Andrea, Anna Noemi Trussardi, Milena Ruffino, Maria Luisa Lorusso, Carmen Cattaneo, Raffaella Galli, Massimo Molteni, and Marco Zorzi. "Multisensory Spatial Attention Deficits Are Predictive of Phonological Decoding Skills in Developmental Dyslexia." Journal of Cognitive Neuroscience 22, no. 5 (May 2010): 1011–25. http://dx.doi.org/10.1162/jocn.2009.21232.

Abstract:
Although the dominant approach posits that developmental dyslexia arises from deficits in systems that are exclusively linguistic in nature (i.e., phonological deficit theory), dyslexics show a variety of lower level deficits in sensory and attentional processing. Although their link to the reading disorder remains contentious, recent empirical and computational studies suggest that spatial attention plays an important role in phonological decoding. The present behavioral study investigated exogenous spatial attention in dyslexic children and matched controls by measuring RTs to visual and auditory stimuli in cued-detection tasks. Dyslexics with poor nonword decoding accuracy showed a slower time course of visual and auditory (multisensory) spatial attention compared with both chronological age and reading level controls as well as compared with dyslexics with slow but accurate nonword decoding. Individual differences in the time course of multisensory spatial attention accounted for 31% of unique variance in the nonword reading performance of the entire dyslexic sample after controlling for age, IQ, and phonological skills. The present study suggests that multisensory “sluggish attention shifting”—related to a temporoparietal dysfunction—selectively impairs the sublexical mechanisms that are critical for reading development. These findings may offer a new approach for early identification and remediation of developmental dyslexia.
7

Xu, Zihao, Yanru Bai, Ran Zhao, Qi Zheng, Guangjian Ni, and Dong Ming. "Auditory attention decoding from EEG-based Mandarin speech envelope reconstruction." Hearing Research 422 (September 2022): 108552. http://dx.doi.org/10.1016/j.heares.2022.108552.

8

Aroudi, Ali, and Simon Doclo. "Cognitive-Driven Binaural Beamforming Using EEG-Based Auditory Attention Decoding." IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2020): 862–75. http://dx.doi.org/10.1109/taslp.2020.2969779.

9

Aroudi, Ali, Eghart Fischer, Maja Serman, Henning Puder, and Simon Doclo. "Closed-Loop Cognitive-Driven Gain Control of Competing Sounds Using Auditory Attention Decoding." Algorithms 14, no. 10 (September 30, 2021): 287. http://dx.doi.org/10.3390/a14100287.

Abstract:
Recent advances have shown that it is possible to identify the target speaker which a listener is attending to using single-trial EEG-based auditory attention decoding (AAD). Most AAD methods have been investigated for an open-loop scenario, where AAD is performed in an offline fashion without presenting online feedback to the listener. In this work, we aim at developing a closed-loop AAD system that allows to enhance a target speaker, suppress an interfering speaker and switch attention between both speakers. To this end, we propose a cognitive-driven adaptive gain controller (AGC) based on real-time AAD. Using the EEG responses of the listener and the speech signals of both speakers, the real-time AAD generates probabilistic attention measures, based on which the attended and the unattended speaker are identified. The AGC then amplifies the identified attended speaker and attenuates the identified unattended speaker, which are presented to the listener via loudspeakers. We investigate the performance of the proposed system in terms of the decoding performance and the signal-to-interference ratio (SIR) improvement. The experimental results show that, although there is a significant delay to detect attention switches, the proposed system is able to improve the SIR between the attended and the unattended speaker. In addition, no significant difference in decoding performance is observed between closed-loop AAD and open-loop AAD. The subjective evaluation results show that the proposed closed-loop cognitive-driven system demands a similar level of cognitive effort to follow the attended speaker, to ignore the unattended speaker and to switch attention between both speakers compared to using open-loop AAD. Closed-loop AAD in an online fashion is feasible and enables the listener to interact with the AGC.
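The adaptive gain control described above can be illustrated with a minimal controller that smooths the decoder's probabilistic attention measure and maps it to per-speaker gains before remixing. The smoothing constant, gain range, and linear-in-dB mapping below are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

class AttentionGainController:
    """Toy cognitive-driven AGC: amplify the decoded attended speaker,
    attenuate the other, with smoothing to avoid abrupt gain jumps."""

    def __init__(self, alpha=0.95, min_gain_db=-12.0, max_gain_db=0.0):
        self.alpha = alpha              # exponential smoothing of the attention measure
        self.min_gain_db = min_gain_db  # attenuation for the unattended speaker
        self.max_gain_db = max_gain_db  # gain for the attended speaker
        self.p_smooth = 0.5             # start undecided between the two speakers

    def step(self, p_speaker1, frame1, frame2):
        """p_speaker1: decoder's probability that speaker 1 is attended.
        frame1/frame2: time-aligned audio frames of the two speakers.
        Returns the remixed output frame."""
        self.p_smooth = self.alpha * self.p_smooth + (1 - self.alpha) * p_speaker1
        span = self.max_gain_db - self.min_gain_db
        g1 = 10 ** ((self.min_gain_db + self.p_smooth * span) / 20)
        g2 = 10 ** ((self.min_gain_db + (1 - self.p_smooth) * span) / 20)
        return g1 * frame1 + g2 * frame2

# Example: as the decoder keeps reporting p = 0.9 for speaker 1, the mix
# gradually tilts toward speaker 1 while speaker 2 is attenuated.
agc = AttentionGainController()
for _ in range(50):
    out = agc.step(0.9, np.ones(256), np.ones(256))
```

The slow smoothing is one way to trade stability of the output mix against delay in detecting attention switches, the tension the abstract's results point to.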
10

Wang, Lei, Ed X. Wu, and Fei Chen. "EEG-based auditory attention decoding using speech-level-based segmented computational models." Journal of Neural Engineering 18, no. 4 (May 25, 2021): 046066. http://dx.doi.org/10.1088/1741-2552/abfeba.


Dissertations / Theses on the topic "Auditory attention decoding":

1

Aroudi, Ali. "Cognitive-Driven Speech Enhancement using EEG-based Auditory Attention Decoding for Hearing Aid Applications." München: Verlag Dr. Hut, 2021. http://d-nb.info/1232846716/34.

2

Cantisani, Giorgia. "Neuro-steered music source separation." PhD thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT038.

Abstract:
In this PhD thesis, we address the challenge of integrating brain-computer interfaces (BCI) and music technologies in the specific application of music source separation, the task of isolating the individual sound sources that are mixed in the audio recording of a musical piece. This problem has been investigated for decades, but never considering BCI as a possible way to guide and inform separation systems. Specifically, we explored how the neural activity characterized by electroencephalographic (EEG) signals reflects information about the attended instrument and how we can use it to inform a source separation system. First, we studied the problem of EEG-based auditory attention decoding of a target instrument in polyphonic music, showing that the EEG tracks musically relevant features which are highly correlated with the time-frequency representation of the attended source and only weakly correlated with the unattended one. Second, we leveraged this "contrast" to inform an unsupervised source separation model based on a novel non-negative matrix factorisation (NMF) variant, named contrastive-NMF (C-NMF), and to automatically separate the attended source. Unsupervised NMF is a powerful approach in applications with no or limited amounts of training data, as is the case when neural recordings are involved. Indeed, the available music-related EEG datasets are still costly and time-consuming to acquire, precluding the possibility of tackling the problem with fully supervised deep learning approaches. Thus, in the last part of the thesis, we explored alternative learning strategies. Specifically, we proposed to adapt a state-of-the-art music source separation model to a specific mixture using the time activations of the sources derived from the user's neural activity at test time. This paradigm can be referred to as one-shot adaptation, as it acts on the target song instance only. We evaluated the proposed approaches on the MAD-EEG dataset, which was specifically assembled for this study, obtaining encouraging results, especially in difficult cases where non-informed models struggle.
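One plausible form of such a contrastive NMF objective, stated here as an assumption rather than the thesis's exact formulation, augments the usual reconstruction cost with a term that rewards correlation between the attended source's activations and an EEG-derived activation estimate while penalizing it for the other sources:

```latex
\min_{W \ge 0,\, H \ge 0}\;
\underbrace{\lVert V - WH \rVert_F^2}_{\text{mixture reconstruction}}
\;-\; \lambda \Big( \operatorname{corr}(h_{\mathrm{att}}, c)
\;-\; \sum_{j \neq \mathrm{att}} \operatorname{corr}(h_j, c) \Big)
```

Here V is the mixture spectrogram, the rows h_j of H are the temporal activations of the sources, c is the activation profile of the attended source derived from the listener's EEG, and λ weights the EEG-driven contrast against reconstruction fidelity.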

Book chapters on the topic "Auditory attention decoding":

1

Nasrin, Fatema, Nafiz Ishtiaque Ahmed, and Muhammad Arifur Rahman. "Auditory Attention State Decoding for the Quiet and Hypothetical Environment: A Comparison Between bLSTM and SVM." In Advances in Intelligent Systems and Computing, 291–301. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4673-4_23.

2

Geirnaert, Simon, Rob Zink, Tom Francart, and Alexander Bertrand. "Fast, Accurate, Unsupervised, and Time-Adaptive EEG-Based Auditory Attention Decoding for Neuro-steered Hearing Devices." In SpringerBriefs in Electrical and Computer Engineering, 29–40. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49457-4_4.


Conference papers on the topic "Auditory attention decoding":

1

Wang, Liting, Xintao Hu, Meng Wang, Jinglei Lv, Junwei Han, Shijie Zhao, Qinglin Dong, Lei Guo, and Tianming Liu. "Decoding dynamic auditory attention during naturalistic experience." In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). IEEE, 2017. http://dx.doi.org/10.1109/isbi.2017.7950678.

2

Pallenberg, René, Ann-Katrin Griedelbach, and Alfred Mertins. "LSTMs for EEG-based Auditory Attention Decoding." In 2023 31st European Signal Processing Conference (EUSIPCO). IEEE, 2023. http://dx.doi.org/10.23919/eusipco58844.2023.10289779.

3

Qiu, Zelin, Jianjun Gu, Dingding Yao, and Junfeng Li. "Exploring Auditory Attention Decoding using Speaker Features." In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-414.

4

Alickovic, Emina, Carlos Francisco Mendoza, Andrew Segar, Maria Sandsten, and Martin A. Skoglund. "Decoding Auditory Attention From EEG Data Using Cepstral Analysis." In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). IEEE, 2023. http://dx.doi.org/10.1109/icasspw59220.2023.10193192.

5

Aroudi, Ali, Daniel Marquardt, and Simon Doclo. "EEG-Based Auditory Attention Decoding Using Steerable Binaural Superdirective Beamformer." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8462278.

6

Aroudi, Ali, Marc Delcroix, Tomohiro Nakatani, Keisuke Kinoshita, Shoko Araki, and Simon Doclo. "Cognitive-Driven Convolutional Beamforming Using EEG-Based Auditory Attention Decoding." In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020. http://dx.doi.org/10.1109/mlsp49062.2020.9231657.

7

Chen, Xiaoyu, Changde Du, Qiongyi Zhou, and Huiguang He. "Auditory Attention Decoding with Task-Related Multi-View Contrastive Learning." In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3611869.

8

Heintz, Nicolas, Simon Geirnaert, Tom Francart, and Alexander Bertrand. "Unbiased Unsupervised Stimulus Reconstruction for EEG-Based Auditory Attention Decoding." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10096608.

9

Fu, Zhen, Bo Wang, Xihong Wu, and Jing Chen. "Auditory Attention Decoding from EEG using Convolutional Recurrent Neural Network." In 2021 29th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco54536.2021.9616195.

10

An, Winko W., Alexander Pei, Abigail L. Noyce, and Barbara Shinn-Cunningham. "Decoding auditory attention from EEG using a convolutional neural network." In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2021. http://dx.doi.org/10.1109/embc46164.2021.9630484.

