TY - JOUR
T1 - Congruent audiovisual speech enhances auditory attention decoding with EEG
AU - Fu, Zhen
AU - Wu, Xihong
AU - Chen, Jing
N1 - Publisher Copyright:
© 2019 IOP Publishing Ltd.
PY - 2019/11/6
Y1 - 2019/11/6
AB - Objective. The auditory attention decoding (AAD) approach can be used to determine the identity of the attended speaker during an auditory selective attention task by analyzing electroencephalography (EEG) measurements. The AAD approach has the potential to guide the design of speech enhancement algorithms in hearing aids, i.e. to identify the speech stream of the listener's interest so that hearing-aid algorithms can amplify the target speech and attenuate other distracting sounds. This would consequently improve speech understanding and communication and reduce cognitive load. The present work aimed to investigate whether additional visual input (i.e. lipreading) would enhance AAD performance for normal-hearing listeners. Approach. In a two-talker scenario, in which auditory stimuli from audiobooks narrated by two speakers were presented, multi-channel EEG signals were recorded while participants selectively attended to one speaker and ignored the other. The speakers' mouth movements were recorded during narration to provide the visual stimuli. The stimulus conditions included audio-only, visual input congruent with either speaker (i.e. attended or unattended), and visual input incongruent with both speakers. The AAD approach was performed separately for each condition to evaluate the effect of additional visual input on AAD. Main results. Relative to the audio-only condition, AAD performance improved with visual input only when it was congruent with the attended speech stream, and the improvement in decoding accuracy was approximately 14 percentage points. Cortical envelope-tracking activity in both the auditory and visual cortices was stronger for the congruent audiovisual speech condition than for the other conditions. In addition, the congruent audiovisual condition showed greater AAD robustness, achieving higher accuracy than the audio-only condition with fewer channels and shorter trial durations. Significance. The present work complements previous studies and further demonstrates the feasibility of AAD-guided hearing-aid design for everyday face-to-face conversations. It also provides guidance for designing a low-density EEG setup for the AAD approach.
UR - http://www.scopus.com/inward/record.url?scp=85074620784&partnerID=8YFLogxK
U2 - 10.1088/1741-2552/ab4340
DO - 10.1088/1741-2552/ab4340
M3 - Article
C2 - 31505476
AN - SCOPUS:85074620784
SN - 1741-2560
VL - 16
JO - Journal of Neural Engineering
JF - Journal of Neural Engineering
IS - 6
M1 - 066033
ER -