Rendering of virtual human based on video sequence

Ge Huai*, Yue Liu, Dongdong Weng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

To render a virtual human more realistically, a video-sequence-based method for generating a virtual human is proposed. Each fragment of the conversation is converted into a video according to a given transcript, and the virtual human is rendered by playing the videos in order. First, all the facial models are acquired with a 3D scanner and clipped to obtain meshes with smooth edges and textures with a transparent channel. Once the models are ready, the position of the face in the picture is estimated and the selected models are embedded into the picture. Then, the transition between any two adjacent frames is obtained by image morphing, and all the image sequences are converted into a video. When the video is played, a virtual human resembling a real person can be seen talking with the viewer. Finally, the feasibility of the method is verified and the rendering quality is analyzed through a series of experiments, including subjective evaluation and calculation of the matching between real and composite pictures.
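For illustration, a minimal sketch of the three steps the abstract outlines: alpha-compositing the clipped face texture (with its transparent channel) into the picture at the estimated face position, generating transitions between adjacent frames, and concatenating the result into a video. The paper does not specify its morphing algorithm beyond the abstract, so a plain cross-dissolve stands in for it here; all function names, OpenCV usage, and parameters below are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract (not the
# paper's code). Uses OpenCV and NumPy; a linear cross-dissolve is a
# stand-in for the unspecified image-morphing step.
import cv2
import numpy as np

def composite_face(background, face_rgba, x, y):
    """Alpha-blend a face texture with a transparent channel onto the
    background picture at the estimated face position (x, y)."""
    h, w = face_rgba.shape[:2]
    alpha = face_rgba[:, :, 3:4].astype(np.float32) / 255.0
    roi = background[y:y + h, x:x + w].astype(np.float32)
    face = face_rgba[:, :, :3].astype(np.float32)
    blended = alpha * face + (1.0 - alpha) * roi
    background[y:y + h, x:x + w] = blended.astype(np.uint8)
    return background

def morph_transition(frame_a, frame_b, n_steps=10):
    """Yield intermediate frames between two adjacent key frames.
    A simple cross-dissolve; the paper uses image morphing."""
    for i in range(1, n_steps + 1):
        t = i / (n_steps + 1)
        yield cv2.addWeighted(frame_a, 1.0 - t, frame_b, t, 0.0)

def write_video(frames, path="virtual_human.mp4", fps=25):
    """Concatenate the composited image sequence into a video file."""
    frames = list(frames)
    h, w = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()
```

Under these assumptions, playing the written video back in transcript order would produce the talking virtual human the abstract describes; the actual system presumably replaces the cross-dissolve with feature-based morphing for smoother mouth and face transitions.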

Original language: English
Pages (from-to): 296-300
Number of pages: 5
Journal: Guangxue Jishu/Optical Technique
Volume: 41
Issue number: 4
Publication status: Published - 1 Jul 2015

Keywords

  • Face modeling
  • Image morphing
  • Video-sequence
  • Virtual human
