Comic-guided speech synthesis

Yujia Wang, Wenguan Wang, Wei Liang*, Lap Fai Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)

Abstract

We introduce a novel approach for synthesizing realistic speech for comics. Using a comic page as input, our approach synthesizes speech for each comic character following the reading flow. It adopts a cascading strategy that synthesizes speech in two stages: Comic Visual Analysis and Comic Speech Synthesis. In the first stage, the input comic page is analyzed to identify the gender and age of each character, as well as the text each character speaks and its corresponding emotion. Guided by this analysis, in the second stage, our approach synthesizes realistic speech for each character that is consistent with the visual observations. Our experiments show that the proposed approach can synthesize realistic and lively speech for different types of comics. Perceptual studies performed on the synthesis results of multiple sample comics validate the efficacy of our approach.
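The cascading two-stage design described in the abstract can be sketched in code. The sketch below is illustrative only and not the authors' implementation: the panel/balloon dictionary format, the `reading_order` field, and the attribute names are all assumptions, and the synthesis stage is a placeholder standing in for a TTS model conditioned on the visual attributes.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Output of Stage 1: one speech balloon with inferred speaker attributes."""
    text: str
    speaker_gender: str   # e.g. "female" (inferred from the artwork)
    speaker_age: str      # e.g. "adult"
    emotion: str          # e.g. "surprise"

def analyze_comic_page(panels):
    """Stage 1, Comic Visual Analysis (hypothetical): order panels by the
    reading flow and collect each balloon's text with the speaker attributes
    a vision model would infer from the page."""
    utterances = []
    for panel in sorted(panels, key=lambda p: p["reading_order"]):
        for balloon in panel["balloons"]:
            utterances.append(Utterance(
                text=balloon["text"],
                speaker_gender=balloon["gender"],
                speaker_age=balloon["age"],
                emotion=balloon["emotion"],
            ))
    return utterances

def synthesize_speech(utterance):
    """Stage 2, Comic Speech Synthesis (placeholder): a real system would
    condition a speech synthesizer on the Stage-1 attributes; here we just
    return a text descriptor of the conditioned clip."""
    return (f"[{utterance.speaker_gender}/{utterance.speaker_age}/"
            f"{utterance.emotion}] {utterance.text}")
```

Usage: the cascade runs Stage 1 once per page, then Stage 2 once per utterance, so the synthesized clips come out already ordered by the reading flow.

```python
panels = [
    {"reading_order": 2, "balloons": [
        {"text": "Look out!", "gender": "male", "age": "adult", "emotion": "fear"}]},
    {"reading_order": 1, "balloons": [
        {"text": "Hi!", "gender": "female", "age": "child", "emotion": "joy"}]},
]
clips = [synthesize_speech(u) for u in analyze_comic_page(panels)]
# The child's greeting is synthesized first, following the reading flow.
```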

Original language: English
Article number: 187
Journal: ACM Transactions on Graphics
Volume: 38
Issue number: 6
DOIs
Publication status: Published - Nov 2019

Keywords

  • Comics
  • Deep learning
  • Speech synthesis

