Multi-sensor data fusion for optical tracking of head pose

Bin Luo, Yong Tian Wang*, Yue Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Accurate head pose tracking is key to achieving precise registration in indoor augmented reality systems. This paper proposes a novel approach based on multi-sensor data fusion to achieve high-accuracy optical tracking of head pose. The approach employs two extended Kalman filters and one fusion filter in a multi-sensor environment to fuse the pose data from two complementary optical trackers: an inside-out tracker (IOT) with one camera and an outside-in tracker (OIT) with two cameras. The aim is to reduce the pose errors of the optical tracking sensors. A representative experimental setup is designed to verify this approach. The experimental results show that, in the static state, the pose errors of the IOT and OIT are consistent with the theoretical results obtained by propagating the error covariance matrices from the respective image noises to the final pose errors, and that, in the dynamic state, the proposed multi-sensor data fusion approach with the combined optical tracker achieves more accurate and more stable position and orientation outputs than either the IOT or the OIT used alone.
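The abstract describes merging the IOT and OIT pose estimates through a fusion filter. A standard way to combine two independent estimates with known error covariances is inverse-covariance (information-form) weighting; the sketch below illustrates that idea on a 3-D position vector. All function names and numbers are illustrative assumptions, not the paper's actual fusion filter.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Fuse two independent estimates by inverse-covariance weighting.

    This is the information-filter combination step: the more certain
    tracker (smaller covariance) receives the larger weight, and the
    fused covariance is never larger than either input covariance.
    """
    I1 = np.linalg.inv(P1)       # information matrix of tracker 1
    I2 = np.linalg.inv(P2)       # information matrix of tracker 2
    P = np.linalg.inv(I1 + I2)   # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)  # covariance-weighted fused state
    return x, P

# Hypothetical example: IOT is noisier (0.2 m std) than OIT (0.1 m std),
# so the fused position lies closer to the OIT estimate.
x_iot = np.array([1.0, 0.0, 0.0])
x_oit = np.array([1.2, 0.0, 0.0])
x_fused, P_fused = fuse_estimates(x_iot, 0.04 * np.eye(3),
                                  x_oit, 0.01 * np.eye(3))
```

In this toy case the fused x-coordinate is 1.16 m, four-fifths of the way toward the more reliable OIT reading, and the fused variance (0.008) is smaller than either input variance.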

Original language: English
Pages (from-to): 1239-1249
Number of pages: 11
Journal: Zidonghua Xuebao/Acta Automatica Sinica
Volume: 36
Issue number: 9
DOI
Publication status: Published - Sep 2010
