TY - GEN
T1 - Stereovision-only based interactive mobile robot for human-robot face-to-face interaction
AU - Chen, Lei
AU - Dong, Zhen
AU - Gao, Sheng
AU - Yuan, Baofeng
AU - Pei, Mingtao
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/12/4
Y1 - 2014/12/4
N2 - In this paper, we present a stereovision-only based interactive mobile robot that supports human-robot face-to-face interaction in the real world. A three-level architecture, consisting of a sensor level, a perception level, and a behavior level, is designed so that the robot can perceive, understand, and react to human activity during interaction based only on visual information. A high-performance stand-alone stereovision system (RGBD imager), developed in our lab, is used to obtain composite color (RGB) images and dense disparity (D) maps at video rate. The RGBD imager gives the robot a human-like 3-D visual perception ability to (1) autonomously detect the human of interest with whom the robot could interact, using offline learning approaches, and (2) focus exclusively on the target human while both the human and the robot are moving during interaction, using online learning approaches. We demonstrate and evaluate the performance of our interactive mobile robot in an office environment. The experimental results show that reliable and dynamic face-to-face interaction is achieved: the target human face is always kept in the field of view and at a suitable social distance from the robot.
AB - In this paper, we present a stereovision-only based interactive mobile robot that supports human-robot face-to-face interaction in the real world. A three-level architecture, consisting of a sensor level, a perception level, and a behavior level, is designed so that the robot can perceive, understand, and react to human activity during interaction based only on visual information. A high-performance stand-alone stereovision system (RGBD imager), developed in our lab, is used to obtain composite color (RGB) images and dense disparity (D) maps at video rate. The RGBD imager gives the robot a human-like 3-D visual perception ability to (1) autonomously detect the human of interest with whom the robot could interact, using offline learning approaches, and (2) focus exclusively on the target human while both the human and the robot are moving during interaction, using online learning approaches. We demonstrate and evaluate the performance of our interactive mobile robot in an office environment. The experimental results show that reliable and dynamic face-to-face interaction is achieved: the target human face is always kept in the field of view and at a suitable social distance from the robot.
UR - http://www.scopus.com/inward/record.url?scp=84919934212&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2014.322
DO - 10.1109/ICPR.2014.322
M3 - Conference contribution
AN - SCOPUS:84919934212
T3 - Proceedings - International Conference on Pattern Recognition
SP - 1840
EP - 1845
BT - Proceedings - International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd International Conference on Pattern Recognition, ICPR 2014
Y2 - 24 August 2014 through 28 August 2014
ER -