Tracking articulated hand underlying graphical model with depth cue

Tangli Liu*, Wei Liang, Xinxiao Wu, Lei Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Citations (Scopus)

Abstract

Visual tracking of articulated objects in real 3D space is a challenging problem with applications in advanced human-computer interfaces and semantic gesture understanding. In this paper, a graphical model representing the articulated human hand, together with an NBP (nonparametric belief propagation) algorithm embedded with CAMSHIFT, is applied to infer the hand configuration in 3D space for visual hand tracking. We also introduce an image depth cue, captured by two calibrated cameras, alongside color and edge cues in the observation model. The depth cue allows our method to track the human hand at a comparatively accurate distance from the cameras, especially in cluttered scenes containing objects of similar color or edges. All image cues are converted to probability distributions for use in the graphical model framework. Experiments and theoretical analysis show that the proposed method improves tracking efficiency and robustness.
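The abstract's depth cue from two calibrated cameras can be illustrated with the standard stereo triangulation relation Z = f·B/d (depth from focal length, baseline, and disparity). The paper does not give its exact depth computation, so the function and calibration values below are a minimal illustrative sketch, not the authors' implementation:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a scene point from a rectified, calibrated stereo pair.

    Uses the standard triangulation relation Z = f * B / d, where
    f is the focal length in pixels, B the camera baseline in metres,
    and d the horizontal disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 700 px focal length, 10 cm baseline.
# A hand region matched with 35 px disparity lies at:
z = depth_from_disparity(disparity_px=35.0, focal_px=700.0, baseline_m=0.10)
print(z)  # 2.0 metres
```

In a tracker like the one described, such per-pixel depth values would be converted to a probability distribution over hand distance, in the same way the color and edge cues are, before being fed to the graphical model.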

Original language: English
Title of host publication: Proceedings - 1st International Congress on Image and Signal Processing, CISP 2008
Pages: 249-253
Number of pages: 5
DOIs
Publication status: Published - 2008
Event: 1st International Congress on Image and Signal Processing, CISP 2008 - Sanya, Hainan, China
Duration: 27 May 2008 - 30 May 2008

Publication series

Name: Proceedings - 1st International Congress on Image and Signal Processing, CISP 2008
Volume: 4

Conference

Conference: 1st International Congress on Image and Signal Processing, CISP 2008
Country/Territory: China
City: Sanya, Hainan
Period: 27/05/08 - 30/05/08
