Speaker-Independent Audio-Visual Speech Separation Based on Transformer in Multi-Talker Environments

Jing WANG*, Yiyu LUO, Weiming YI, Xiang XIE

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Speech separation is the task of extracting target speech while suppressing background interference components. In applications such as video telephony, visual information about the target speaker is available and can be leveraged for multi-speaker speech separation. Most previous multi-speaker separation methods are based mainly on convolutional or recurrent neural networks. Recently, Transformer-based Seq2Seq models have achieved state-of-the-art performance in various tasks, such as neural machine translation (NMT) and automatic speech recognition (ASR). The Transformer has shown an advantage in modeling audio-visual temporal context with multi-head attention blocks that explicitly assign attention weights. Moreover, the Transformer contains no recurrent sub-networks and therefore supports parallel sequence computation. In this paper, we propose a novel speaker-independent audio-visual speech separation method based on Transformer, which can be flexibly applied to an unknown number and identity of speakers. The model receives both audio and visual streams, namely the noisy spectrogram and the target speaker's lip embeddings, and predicts a complex time-frequency mask for the corresponding target speaker. The model consists of three main components: an audio encoder, a visual encoder, and a Transformer-based mask generator. Two different encoder structures, ResNet-based and Transformer-based, are investigated and compared. The performance of the proposed method is evaluated in terms of source separation and speech quality metrics. Experimental results on the benchmark GRID dataset show the effectiveness of the method on the speaker-independent separation task in multi-talker environments. The model generalizes well to unseen speaker identities and noise types. Although trained only on 2-speaker mixtures, the model achieves reasonable performance when tested on both 2-speaker and 3-speaker mixtures. In addition, the model retains an advantage over previous audio-visual speech separation approaches.
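The pipeline described in the abstract (audio encoder, visual encoder, Transformer-based mask generator, complex time-frequency masking) can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the class name AVMaskGenerator, the plain linear encoders, and all layer sizes are placeholders and do not reproduce the paper's ResNet-based or Transformer-based encoders, training objective, or hyperparameters.

```python
# Minimal sketch of an audio-visual complex-mask predictor (illustrative only).
import torch
import torch.nn as nn


class AVMaskGenerator(nn.Module):
    def __init__(self, n_freq=257, lip_dim=512, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        # Audio encoder: project each complex spectrogram frame (real + imag) into d_model.
        self.audio_enc = nn.Linear(n_freq * 2, d_model)
        # Visual encoder: project per-frame lip embeddings into d_model.
        self.visual_enc = nn.Linear(lip_dim, d_model)
        # Transformer-based mask generator over the fused audio-visual sequence.
        layer = nn.TransformerEncoderLayer(d_model=2 * d_model, nhead=n_heads,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Output head: a complex (real + imaginary) T-F mask for the target speaker.
        self.mask_head = nn.Linear(2 * d_model, n_freq * 2)

    def forward(self, noisy_spec, lip_emb):
        # noisy_spec: (batch, frames, n_freq, 2)  mixture spectrogram, real/imag parts
        # lip_emb:    (batch, frames, lip_dim)    target-speaker lip embeddings,
        #                                         assumed upsampled to the audio frame rate
        b, t, f, _ = noisy_spec.shape
        a = self.audio_enc(noisy_spec.reshape(b, t, f * 2))
        v = self.visual_enc(lip_emb)
        fused = self.fusion(torch.cat([a, v], dim=-1))        # frame-wise fusion
        mask = self.mask_head(fused).reshape(b, t, f, 2)
        # Complex masking: (Mr + jMi) * (Sr + jSi) applied to the mixture spectrogram.
        sr, si = noisy_spec[..., 0], noisy_spec[..., 1]
        mr, mi = mask[..., 0], mask[..., 1]
        est = torch.stack([mr * sr - mi * si, mr * si + mi * sr], dim=-1)
        return mask, est


if __name__ == "__main__":
    model = AVMaskGenerator()
    spec = torch.randn(2, 100, 257, 2)   # toy mixture spectrogram
    lips = torch.randn(2, 100, 512)      # toy lip embeddings
    mask, est = model(spec, lips)
    print(mask.shape, est.shape)         # both (2, 100, 257, 2)
```

The estimated target spectrogram would then be inverted back to a waveform (e.g., via inverse STFT); how the paper trains the mask and fuses the two streams in detail is not specified in this abstract.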

Original language: English
Pages (from-to): 766-777
Number of pages: 12
Journal: IEICE Transactions on Information and Systems
Volume: 105
Issue number: 4
DOIs
Publication status: Published - 2022

Keywords

  • audio-visual speech separation
  • lip embedding
  • multi-head attention
  • multi-talker
  • time-frequency mask
  • transformer

