3D model based expression tracking in intrinsic expression space

Qiang Wang*, Haizhou Ai, Guangyou Xu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

In this paper, a novel method of learning an intrinsic facial expression space for expression tracking is proposed. First, a partial 3D face model is constructed from a trinocular image, and the expression space is parameterized using MPEG-4 FAPs. An algorithm for learning the intrinsic expression space from the parameterized FAP space is then derived; the resulting intrinsic space reduces the dimensionality to as few as five. We show that the obtained expression space is superior to the space obtained by PCA. A dynamical model is then derived and trained in this intrinsic expression space. Finally, the learned tracker is developed in a particle-filter-style tracking framework. Experiments on both synthetic and real videos show that the learned tracker performs stably over long sequences, and the results are encouraging.
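The pipeline the abstract describes (a low-dimensional expression space plus a particle-filter-style tracker with a learned dynamical model) can be sketched in miniature. The sketch below uses plain PCA as the dimensionality reduction — i.e., the baseline the paper improves on, not its intrinsic-space method — and a random-walk dynamics with synthetic data; all dimensions, noise scales, and the `observe` likelihood are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: PCA baseline + particle-filter-style tracking in a
# 5-D expression space. Synthetic stand-in data; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MPEG-4 FAP vectors (e.g., 68-D) over a video sequence.
n_frames, fap_dim, latent_dim = 200, 68, 5
faps = rng.normal(size=(n_frames, fap_dim))

# PCA baseline: project centered FAP vectors onto the top 5 components.
mean = faps.mean(axis=0)
centered = faps - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:latent_dim]         # (5, 68) projection basis
latent = centered @ basis.T     # frames expressed in the 5-D space

def observe(particles, target):
    """Illustrative likelihood: Gaussian in latent-space distance."""
    d2 = np.sum((particles - target) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / 0.5)
    return w / w.sum()

# Particle-filter-style tracking with an assumed random-walk dynamical
# model (the paper trains its dynamics; here we just add noise).
n_particles = 500
particles = np.zeros((n_particles, latent_dim))
estimates = []
for t in range(n_frames):
    particles += rng.normal(scale=0.3, size=particles.shape)  # predict
    w = observe(particles, latent[t])                         # weight
    estimates.append(w @ particles)                           # posterior mean
    idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
    particles = particles[idx]

estimates = np.array(estimates)
print(estimates.shape)  # → (200, 5)
```

Tracking in a 5-D space instead of the full FAP space is what makes the particle filter tractable: far fewer particles are needed to cover the state space densely.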

Original language: English
Title of host publication: Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition, FGR 2004
Pages: 487-492
Number of pages: 6
Publication status: Published - 2004
Externally published: Yes
Event: Sixth IEEE International Conference on Automatic Face and Gesture Recognition, FGR 2004 - Seoul, Korea, Republic of
Duration: 17 May 2004 → 19 May 2004

Publication series

Name: Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition

Conference

Conference: Sixth IEEE International Conference on Automatic Face and Gesture Recognition, FGR 2004
Country/Territory: Korea, Republic of
City: Seoul
Period: 17/05/04 → 19/05/04
