TY - GEN
T1 - Single-Channel Speech Separation Integrating Pitch Information Based on a Multi Task Learning Framework
AU - Li, Xiang
AU - Liu, Rui
AU - Song, Tao
AU - Wu, Xihong
AU - Chen, Jing
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Pitch is a critical cue for speech separation in human auditory perception. Although pitch tracking in single-talker speech succeeds in many applications, extracting pitch information from speech mixtures remains a challenging problem in machine perception. In this paper, we aimed to combine speech separation and pitch tracking so that each could benefit from the other. A multi-task learning framework was proposed, in which a unified objective considering both speech separation and pitch tracking was used, based on utterance-level permutation invariant training (uPIT) as well as deep clustering (DPCL). In such a framework, the two tasks were optimized simultaneously and could benefit from each other through the shared layers of the networks. Experimental results indicated that the proposed multi-task framework outperformed the corresponding single-task framework in terms of both speech separation and pitch tracking. The improvement was more significant for challenging same-gender mixtures.
AB - Pitch is a critical cue for speech separation in human auditory perception. Although pitch tracking in single-talker speech succeeds in many applications, extracting pitch information from speech mixtures remains a challenging problem in machine perception. In this paper, we aimed to combine speech separation and pitch tracking so that each could benefit from the other. A multi-task learning framework was proposed, in which a unified objective considering both speech separation and pitch tracking was used, based on utterance-level permutation invariant training (uPIT) as well as deep clustering (DPCL). In such a framework, the two tasks were optimized simultaneously and could benefit from each other through the shared layers of the networks. Experimental results indicated that the proposed multi-task framework outperformed the corresponding single-task framework in terms of both speech separation and pitch tracking. The improvement was more significant for challenging same-gender mixtures.
KW - multi-pitch tracking
KW - multi-task learning
KW - permutation invariant training
KW - Speech separation
UR - http://www.scopus.com/inward/record.url?scp=85089239899&partnerID=8YFLogxK
U2 - 10.1109/ICASSP40776.2020.9053460
DO - 10.1109/ICASSP40776.2020.9053460
M3 - Conference contribution
AN - SCOPUS:85089239899
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 7279
EP - 7283
BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Y2 - 4 May 2020 through 8 May 2020
ER -