TY - GEN
T1 - Tracking handheld object using three layer RGB-D image space
AU - Chaudhary, Krishneel
AU - Mae, Yasushi
AU - Kojima, Masaru
AU - Arai, Tatsuo
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/6/29
Y1 - 2015/6/29
N2 - Visual tracking of objects subjected to non-linear motion and appearance changes has proven to be a difficult task in computer vision. While research in visual object tracking has progressed significantly toward robust tracking under non-linear motion and appearance changes, these algorithms have shown limited capability for long-term tracking of handheld objects during human-object interactions. Tracking failure is a consequence of abrupt changes in the handheld object's motion, which cause the tracker to drift off the optimal object space. In this paper, we present a novel three-layer RGB-D image model, formulated with Bayesian filters, that tracks a handheld object using a near-constant-velocity motion model. Our method divides the image into three layers of abstraction, each encoding visual information about the environment, the human, and the object, and each contributing toward precise localization of the handheld object during tracking. A boundary re-alignment step is introduced during tracking so that the tracker-predicted object region is re-aligned to the optimal object region, thereby reducing the likelihood of the tracker drifting off the object space. This compensation of the tracker's prediction offset enables our algorithm to robustly track a handheld object subjected to abrupt changes in motion during manipulation.
AB - Visual tracking of objects subjected to non-linear motion and appearance changes has proven to be a difficult task in computer vision. While research in visual object tracking has progressed significantly toward robust tracking under non-linear motion and appearance changes, these algorithms have shown limited capability for long-term tracking of handheld objects during human-object interactions. Tracking failure is a consequence of abrupt changes in the handheld object's motion, which cause the tracker to drift off the optimal object space. In this paper, we present a novel three-layer RGB-D image model, formulated with Bayesian filters, that tracks a handheld object using a near-constant-velocity motion model. Our method divides the image into three layers of abstraction, each encoding visual information about the environment, the human, and the object, and each contributing toward precise localization of the handheld object during tracking. A boundary re-alignment step is introduced during tracking so that the tracker-predicted object region is re-aligned to the optimal object region, thereby reducing the likelihood of the tracker drifting off the object space. This compensation of the tracker's prediction offset enables our algorithm to robustly track a handheld object subjected to abrupt changes in motion during manipulation.
KW - Human-object interaction (HOI)
KW - Particle filters
KW - Robotic vision
KW - Visual object tracking
UR - http://www.scopus.com/inward/record.url?scp=84938257856&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2015.7139524
DO - 10.1109/ICRA.2015.7139524
M3 - Conference contribution
AN - SCOPUS:84938257856
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 2436
EP - 2441
BT - 2015 IEEE International Conference on Robotics and Automation, ICRA 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2015 IEEE International Conference on Robotics and Automation, ICRA 2015
Y2 - 26 May 2015 through 30 May 2015
ER -