MOS PREDICTOR FOR SYNTHETIC SPEECH WITH I-VECTOR INPUTS

Miao Liu, Jing Wang, Shicong Li, Fei Xiang, Yue Yao, Lidong Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

Based on deep learning technology, non-intrusive methods have received increasing attention for synthetic speech quality assessment, since they do not require reference signals. Meanwhile, the i-vector has been widely used in paralinguistic speech attribute recognition, such as speaker and emotion recognition, but few studies have used it to estimate speech quality. In this paper, we propose a neural-network-based model that splices the deep features extracted by a convolutional neural network (CNN) with an i-vector along the time axis and uses a Transformer encoder as the temporal sequence model. To evaluate the proposed method, we improve the previous prediction models and conduct experiments on the Voice Conversion Challenge (VCC) 2018 and 2016 datasets. Results show that the i-vector contains information closely related to the quality of synthetic speech, and that the proposed models utilizing the i-vector and Transformer encoder substantially improve the accuracy of MOSNet and MBNet on both utterance-level and system-level results.
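The splicing step described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's implementation: all shapes, the projection matrix, and the choice to prepend the projected i-vector as an extra frame (one plausible interpretation of "splicing on the time axis") are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): shapes, the projection
# matrix W, and the random inputs are all assumptions.
rng = np.random.default_rng(0)

T, d_model = 120, 256          # number of frames, CNN feature dimension (assumed)
d_ivec = 400                   # typical i-vector dimension (assumed)

cnn_feats = rng.standard_normal((T, d_model))   # frame-level deep features
ivector = rng.standard_normal(d_ivec)           # one utterance-level i-vector

# Project the i-vector to the CNN feature dimension so it can be
# spliced with the frame sequence along the time axis.
W = rng.standard_normal((d_ivec, d_model)) / np.sqrt(d_ivec)
iv_frame = ivector @ W                          # shape: (d_model,)

# Prepend the projected i-vector as an extra "frame"; the resulting
# (T + 1, d_model) sequence would then feed a Transformer encoder.
spliced = np.concatenate([iv_frame[None, :], cnn_feats], axis=0)
print(spliced.shape)  # (121, 256)
```

Under this reading, the Transformer encoder's self-attention lets every frame attend to the utterance-level i-vector token, which is one way the speaker-dependent information could inform the quality prediction.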

Original language: English
Title of host publication: 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 906-910
Number of pages: 5
ISBN (electronic): 9781665405409
DOI
Publication status: Published - 2022
Event: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore
Duration: 23 May 2022 - 27 May 2022

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2022-May
ISSN (print): 1520-6149

Conference

Conference: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Country/Territory: Singapore
City: Virtual, Online
Period: 23/05/22 - 27/05/22
