EEG-based auditory attention decoding with audiovisual speech for hearing-impaired listeners

Bo Wang, Xiran Xu, Yadong Niu, Chao Wu, Xihong Wu, Jing Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Auditory attention decoding (AAD) can be used to determine the attended speaker during an auditory selective attention task. However, the auditory factors modulating AAD remain unclear for hearing-impaired (HI) listeners. In this study, scalp electroencephalography (EEG) was recorded during an auditory selective attention paradigm in which HI listeners were instructed to attend to one of two simultaneous speech streams, with or without congruent visual input (articulation movements), and at a high or low target-to-masker ratio (TMR). Meanwhile, behavioral hearing tests (i.e., audiogram, speech reception threshold, and temporal modulation transfer function) were used to assess listeners' individual auditory abilities. The results showed that both visual input and an increased TMR significantly enhanced cortical tracking of the attended speech and AAD accuracy. Further analysis revealed that the audiovisual (AV) gain in attended-speech cortical tracking was significantly correlated with listeners' auditory amplitude modulation (AM) sensitivity, and that the TMR gain in attended-speech cortical tracking was significantly correlated with listeners' hearing thresholds. Temporal response function analysis showed that subjects with higher AM sensitivity demonstrated more AV gain over the right occipitotemporal and bilateral frontocentral scalp electrodes.
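AAD of the kind described in the abstract is commonly implemented as a linear backward (stimulus-reconstruction) model: a decoder is trained to reconstruct the attended speech envelope from multichannel EEG, and the attended speaker is then identified as the one whose envelope correlates best with the reconstruction. The sketch below illustrates this general technique on simulated data; it is not the authors' implementation, and the signal parameters, the single-lag decoder, and the ridge penalty are illustrative assumptions (real decoders use a window of time lags and cross-validated regularization).

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_ch = 64, 16                 # assumed sampling rate (Hz) and EEG channel count
n_s = fs * 120                    # 2 minutes of simulated data

# Two competing speech envelopes; the simulated EEG linearly tracks the
# attended one, buried in additive noise.
env_att = np.abs(rng.standard_normal(n_s))
env_un = np.abs(rng.standard_normal(n_s))
mixing = rng.standard_normal(n_ch)
eeg = np.outer(env_att, mixing) + 3.0 * rng.standard_normal((n_s, n_ch))

# Train on the first half, test on the second half.
half = n_s // 2
X_tr, X_te = eeg[:half], eeg[half:]
y_tr = env_att[:half]

# Backward model: ridge regression mapping EEG channels to the attended
# envelope (time lags omitted for brevity).
lam = 1e2
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_ch), X_tr.T @ y_tr)

# Decode: reconstruct the envelope from held-out EEG and pick the speaker
# whose envelope correlates more strongly with the reconstruction.
rec = X_te @ w
r_att = np.corrcoef(rec, env_att[half:])[0, 1]
r_un = np.corrcoef(rec, env_un[half:])[0, 1]
decoded = "attended" if r_att > r_un else "unattended"
print(decoded, round(r_att, 3), round(r_un, 3))
```

In this framing, the paper's AV and TMR effects would appear as changes in the attended-envelope reconstruction correlation (cortical tracking) and in the fraction of test segments decoded correctly (AAD accuracy).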

Original language: English
Pages (from-to): 10972-10983
Number of pages: 12
Journal: Cerebral Cortex
Volume: 33
Issue number: 22
DOIs
Publication status: Published - 15 Nov 2023
Externally published: Yes

Keywords

  • audiovisual speech
  • auditory attention decoding
  • EEG
  • hearing impairment
  • speech-in-noise
