Abstract
Accurate head pose tracking is key to achieving precise registration in indoor augmented reality systems. This paper proposes a novel multi-sensor data fusion approach for high-accuracy optical tracking of head pose. The approach employs two extended Kalman filters and a fusion filter for the multi-sensor environment to fuse pose data from two complementary optical trackers: an inside-out tracker (IOT) using one camera and an outside-in tracker (OIT) using two cameras. The aim is to reduce the pose errors of the optical tracking sensors. A representative experimental setup was designed to verify the approach. The experimental results show that, in the static state, the pose errors of the IOT and OIT are consistent with the theoretical results obtained by propagating the error covariance matrices of the respective image noises to the final pose errors, and that, in the dynamic state, the proposed multi-sensor data fusion approach applied to the combined optical tracker yields more accurate and more stable position and orientation outputs than either the IOT or the OIT alone.
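As a rough illustration of the fusion step the abstract describes, the sketch below shows covariance-weighted (information-form) fusion of two independent pose estimates, such as the EKF outputs of the IOT and OIT, together with first-order propagation of image-noise covariance to pose-error covariance. The function names, the 6-DoF state layout, and all numeric values are hypothetical; the paper's actual fusion filter may differ.

```python
import numpy as np

def propagate_covariance(J, R_image):
    """First-order propagation of image-noise covariance R_image to
    pose-error covariance via a measurement Jacobian J: P = J R J^T."""
    return J @ R_image @ J.T

def fuse_pose_estimates(x_iot, P_iot, x_oit, P_oit):
    """Information-form fusion of two independent pose estimates.

    Each estimate is weighted by the inverse of its error covariance,
    so the more certain tracker dominates the fused output.
    """
    info_iot = np.linalg.inv(P_iot)
    info_oit = np.linalg.inv(P_oit)
    P_fused = np.linalg.inv(info_iot + info_oit)
    x_fused = P_fused @ (info_iot @ x_iot + info_oit @ x_oit)
    return x_fused, P_fused

if __name__ == "__main__":
    # Hypothetical 6-DoF poses (x, y, z, roll, pitch, yaw) and
    # illustrative diagonal error covariances for the two trackers.
    x_iot = np.array([0.10, 0.02, 1.50, 0.01, 0.00, 0.02])
    x_oit = np.array([0.12, 0.01, 1.48, 0.00, 0.01, 0.01])
    P_iot = np.diag([4e-4, 4e-4, 9e-4, 1e-4, 1e-4, 1e-4])
    P_oit = np.diag([1e-4, 1e-4, 2e-3, 2e-4, 2e-4, 2e-4])

    x_f, P_f = fuse_pose_estimates(x_iot, P_iot, x_oit, P_oit)
    print("fused pose:", x_f)
    print("fused std devs:", np.sqrt(np.diag(P_f)))
```

Note that the fused standard deviations are smaller than those of either tracker along every axis, which matches the abstract's claim that the combined tracker is more accurate and more stable than a single IOT or OIT alone.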
Original language | English |
---|---|
Pages (from-to) | 1239-1249 |
Number of pages | 11 |
Journal | Zidonghua Xuebao/Acta Automatica Sinica |
Volume | 36 |
Issue number | 9 |
DOIs | |
Publication status | Published - Sept 2010 |
Keywords
- Augmented reality
- Error covariance matrix
- Head pose tracking
- Multi-sensor data fusion