A multiscale chaotic feature extraction method for speaker recognition

Jiang Lin*, Yi Yumei, Zhang Maosheng, Chen Defeng, Wang Chao, Wang Tonghan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

In speaker recognition systems, feature extraction is a challenging task under environmental noise conditions. To improve the robustness of the features, we propose a multiscale chaotic feature for speaker recognition. We use a multiresolution analysis technique to capture finer information about different speakers in the frequency domain. We then extract the chaotic characteristics of the speech based on a nonlinear dynamic model, which helps to improve the discriminability of the features. Finally, we use a GMM-UBM model to build a speaker recognition system. Our experimental results verify its good performance: under clean-speech and noisy-speech conditions, the ERR value of our method is reduced by 13.94% and 26.5%, respectively, compared with the state-of-the-art method.
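The sketch below illustrates the kind of pipeline the abstract describes, under stated assumptions: a wavelet-based multiresolution decomposition (PyWavelets), a Rosenstein-style largest-Lyapunov-exponent estimate as the per-subband chaotic descriptor, and scikit-learn's GaussianMixture standing in for the UBM. The wavelet family, embedding parameters, and feature layout are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of a multiscale chaotic feature + GMM-UBM pipeline.
# Assumptions: db4 wavelet, 4 decomposition levels, delay-embedding
# parameters (dim=4, tau=2); these are not taken from the paper.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture


def delay_embed(x, dim=4, tau=2):
    """Phase-space reconstruction by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)


def largest_lyapunov(x, dim=4, tau=2, min_sep=10, k_max=20):
    """Crude Rosenstein-style estimate of the largest Lyapunov exponent."""
    X = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(X)
    if n <= min_sep + k_max + 1:
        return 0.0
    # Nearest neighbour for each point, excluding temporally close points.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
    nn = np.argmin(dists, axis=1)
    # Average log-divergence of neighbour pairs over k steps.
    div = []
    for k in range(1, k_max):
        valid = (idx + k < n) & (nn + k < n)
        d = np.linalg.norm(X[idx[valid] + k] - X[nn[valid] + k], axis=1)
        d = d[d > 0]
        div.append(np.mean(np.log(d)) if len(d) else 0.0)
    # Slope of the divergence curve approximates the largest exponent.
    return np.polyfit(np.arange(1, k_max), div, 1)[0]


def multiscale_chaotic_features(signal, wavelet="db4", level=4):
    """One chaotic descriptor per wavelet subband -> multiscale vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([largest_lyapunov(c) for c in coeffs])


def frame_features(signal, frame_len=400, hop=200):
    """Frame the utterance and extract one feature vector per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    return np.stack([multiscale_chaotic_features(f) for f in frames])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder audio; real use would load speech waveforms instead.
    background = rng.standard_normal(16000)
    test_utt = rng.standard_normal(8000)

    ubm = GaussianMixture(n_components=4, covariance_type="diag",
                          random_state=0)
    ubm.fit(frame_features(background))          # universal background model
    score = ubm.score(frame_features(test_utt))  # mean frame log-likelihood
    print(f"UBM log-likelihood: {score:.3f}")
```

A full GMM-UBM system would additionally MAP-adapt the UBM to each enrolled speaker and score test utterances against both the adapted model and the UBM; that step is omitted here for brevity.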

Original language: English
Article number: 8810901
Journal: Complexity
Volume: 2020
DOIs
Publication status: Published - 2020
Externally published: Yes
