Speech-driven 3D facial animation for mobile entertainment

Juan Yan*, Xiang Xie, Hao Hu

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper presents an entertainment-oriented application for mobile services that generates customized speech-driven 3D facial animation and delivers it to end users via MMS (Multimedia Messaging Service). The key methods of the application are discussed, including a 3D facial model built from three photos, on-line 3D facial animation driven by speech or text, and a video format transformer targeting most smart phones. The implementation shows that the facial animation runs vividly, and the system received positive feedback in the subjects' evaluation.
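
As a rough illustration of the kind of pipeline the abstract describes (speech analysis driving mouth shapes on a 3D face before video encoding and MMS delivery), the sketch below maps timed phonemes to viseme keyframes. It is a minimal, hypothetical example: the phoneme-to-viseme table, data classes, and function names are assumptions for illustration and are not taken from the paper.

    # Hypothetical sketch: timed phonemes -> viseme (mouth-shape) keyframes.
    # All names and the phoneme-to-viseme table are illustrative only.

    from dataclasses import dataclass
    from typing import List

    # Tiny illustrative mapping from phonemes to viseme labels.
    PHONEME_TO_VISEME = {
        "AA": "open", "IY": "spread", "UW": "round",
        "M": "closed", "B": "closed", "P": "closed",
        "F": "lip_teeth", "V": "lip_teeth",
    }

    @dataclass
    class PhonemeEvent:
        phoneme: str
        start_ms: int
        end_ms: int

    @dataclass
    class VisemeKeyframe:
        time_ms: int
        viseme: str
        weight: float  # blend weight for the corresponding morph target

    def phonemes_to_keyframes(events: List[PhonemeEvent]) -> List[VisemeKeyframe]:
        """Place one viseme keyframe at the midpoint of each phoneme."""
        keyframes = []
        for ev in events:
            viseme = PHONEME_TO_VISEME.get(ev.phoneme, "neutral")
            midpoint = (ev.start_ms + ev.end_ms) // 2
            keyframes.append(VisemeKeyframe(midpoint, viseme, 1.0))
        return keyframes

    if __name__ == "__main__":
        # Example: the word "moo", roughly M followed by UW.
        events = [PhonemeEvent("M", 0, 120), PhonemeEvent("UW", 120, 400)]
        for kf in phonemes_to_keyframes(events):
            print(kf)

In a full system, such keyframes would drive the morph targets of the 3D face model frame by frame; the rendered frames would then be encoded into a phone-compatible video format and sent over MMS, as outlined in the abstract.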

Original language: English
Pages (from-to): 2334-2337
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2008
Event: INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association - Brisbane, QLD, Australia
Duration: 22 Sept 2008 - 26 Sept 2008

Keywords

  • 3D facial animation
  • MMS
  • Smart phone application

