融合音画同步的唇形合成研究

Translated title of the contribution: Lip synthesis incorporating audio-visual synchronisation

Cong Jin, Jie Wang, Zichun Guo*, Jing Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the flourishing of video-based information dissemination, audio-visual synchronization has gradually become an important standard for measuring video quality. As deep synthesis technology enters the public's view in the field of international communication, lip-sync technology that integrates audio and video synchronization has attracted increasing attention. Existing lip-synthesis models mainly perform lip synthesis on static images, are less effective on dynamic video, and are mostly trained on English datasets, which leads to poor synthesis quality for Mandarin Chinese. To address these problems, this paper builds on the Wav2Lip lip-synthesis model and conducts optimization experiments in a Chinese-language context, testing the effects of different training routes through multiple sets of experiments and providing a useful reference for subsequent Wav2Lip research. This study extends lip synthesis from speech-driven to text-driven operation, discusses applications of lip synthesis in fields such as virtual digital humans, and lays a foundation for the broader application and development of lip-synthesis technology.
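The extension from speech-driven to text-driven lip synthesis described in the abstract implies a two-stage pipeline: first synthesize speech from text, then drive the lip model with the resulting audio. Below is a minimal sketch of such a pipeline, assuming the public Wav2Lip inference script from the Rudrabha/Wav2Lip repository and its documented flags; the pyttsx3 TTS step, the file names, and the checkpoint path are illustrative stand-ins, since the paper's actual text-to-speech front end is not described in this record.

```python
"""Sketch: text-driven lip synthesis as TTS followed by Wav2Lip inference.

Assumptions: the reference Wav2Lip repo (github.com/Rudrabha/Wav2Lip) is
checked out locally with a downloaded checkpoint; pyttsx3 stands in for
whatever TTS engine the paper actually used.
"""
import subprocess

import pyttsx3


def synthesize_speech(text: str, wav_path: str) -> None:
    # Stage 1 (stand-in): render the input text to an audio file.
    engine = pyttsx3.init()
    engine.save_to_file(text, wav_path)
    engine.runAndWait()


def lip_sync(face_video: str, audio_path: str, checkpoint: str, outfile: str) -> None:
    # Stage 2: drive Wav2Lip with the generated audio, using the
    # inference flags documented in the reference repository.
    subprocess.run(
        [
            "python", "inference.py",
            "--checkpoint_path", checkpoint,
            "--face", face_video,
            "--audio", audio_path,
            "--outfile", outfile,
        ],
        check=True,
    )


if __name__ == "__main__":
    synthesize_speech("欢迎收看本期节目。", "speech.wav")  # illustrative Mandarin line
    lip_sync("speaker.mp4", "speech.wav", "checkpoints/wav2lip_gan.pth", "result.mp4")
```

A Mandarin-specific optimization such as the paper's would plug in at the TTS stage and in the training data of the lip model itself; the wiring between the two stages stays the same.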

Original language: Chinese (Traditional)
Pages (from-to): 397-405
Number of pages: 9
Journal: Chinese Journal of Intelligent Science and Technology
Volume: 5
Issue number: 3
DOIs
Publication status: Published - 15 Sept 2023
