A self-supervised model for language identification integrating phonological knowledge

Qingran Zhan, Xiang Xie*, Chenguang Hu, Haobo Cheng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, a self-supervised pre-trained model is proposed and successfully applied to the language identification (LID) task. A Transformer encoder is employed, and a multi-task strategy is used to train the self-supervised model: the first task reconstructs the masked spans of the input frames, and the second is a supervised task in which phoneme and phonological labels are used with a Connectionist Temporal Classification (CTC) loss. Through this multi-task learning loss, the model is expected to capture high-level speech representations in the phonological space. An adaptive loss is also applied to balance the weights between the different tasks. After the pre-training stage, the self-supervised model is used in x-vector systems. The LID experiments are carried out on the Oriental Language Recognition (OLR) challenge corpus, using the 1 s, 3 s, and full-length test sets. Experimental results show that on the 1 s test set the feature-extraction approach achieves the best performance, while on the 3 s and full-length test sets the fine-tuning approach performs best. Furthermore, the results confirm that the multi-task training strategy is effective and that the proposed model achieves the best overall performance.
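The abstract describes the training objective only at a high level. As an illustration, the following is a minimal PyTorch sketch of such a multi-task loss, assuming an uncertainty-style adaptive weighting between the two tasks; the class name, tensor shapes, and the specific weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskSSLLoss(nn.Module):
    """Sketch of a combined objective: masked-span reconstruction
    plus CTC on phoneme/phonological labels, with adaptive
    (learned log-variance) task weighting."""

    def __init__(self):
        super().__init__()
        self.recon = nn.L1Loss()
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)
        # One learnable log-variance per task; the paper's exact
        # adaptive-weighting scheme may differ from this sketch.
        self.log_vars = nn.Parameter(torch.zeros(2))

    def forward(self, pred_frames, target_frames, mask,
                log_probs, labels, input_lens, label_lens):
        # Task 1: reconstruct only the masked spans of the input frames.
        l_recon = self.recon(pred_frames[mask], target_frames[mask])
        # Task 2: CTC over the phoneme / phonological label sequences;
        # log_probs has shape (time, batch, classes), log-softmaxed.
        l_ctc = self.ctc(log_probs, labels, input_lens, label_lens)
        # Adaptive combination: each loss is scaled by exp(-log_var),
        # and log_var itself is added so the weights stay regularized.
        losses = torch.stack([l_recon, l_ctc])
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```

In this sketch, the learned weights let the optimizer down-weight whichever task is noisier, which matches the abstract's stated goal of balancing the reconstruction and supervised objectives.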

Original language: English
Article number: 2259
Journal: Electronics (Switzerland)
Volume: 10
Issue number: 18
Publication status: Published - September 2021

Keywords

  • Language identification
  • Phonological knowledge
  • Self-supervised learning
