TY - GEN
T1 - Object search framework based on gaze interaction
AU - Ratsamee, Photchara
AU - Mae, Yasushi
AU - Kamiyama, Kazuto
AU - Horade, Mitsuhiro
AU - Kojima, Masaru
AU - Kiyokawa, Kiyoshi
AU - Mashita, Tomohiro
AU - Kuroda, Yoshihiro
AU - Takemura, Haruo
AU - Arai, Tatsuo
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015
Y1 - 2015
N2 - In this research, we present an object search framework based on robot-gaze interaction that supports patients with motor paralysis. A patient gives commands by gazing at the target object, after which the robot begins to search autonomously. Unlike approaches that require multiple gaze interactions, ours uses only a few gaze interactions to specify a location clue and an object clue, and integrates RGB-D sensing to segment unknown objects from the environment. Based on hypotheses derived from the gaze information, we apply a multiregion graph cuts method together with an analysis of depth information. Furthermore, our search algorithm allows the robot to find a main observation point, i.e., the point from which the user can clearly observe the target object. If the user is not satisfied with the first segmentation, the robot can adapt its pose to obtain different views of the object. The approach has been implemented and tested on the humanoid robot ENON. With only a small amount of gaze guidance, segmentation of unknown objects achieved a success rate of 85%. The experimental results confirm the applicability of the framework to a wide variety of objects, even when the target object is occluded by another object.
AB - In this research, we present an object search framework based on robot-gaze interaction that supports patients with motor paralysis. A patient gives commands by gazing at the target object, after which the robot begins to search autonomously. Unlike approaches that require multiple gaze interactions, ours uses only a few gaze interactions to specify a location clue and an object clue, and integrates RGB-D sensing to segment unknown objects from the environment. Based on hypotheses derived from the gaze information, we apply a multiregion graph cuts method together with an analysis of depth information. Furthermore, our search algorithm allows the robot to find a main observation point, i.e., the point from which the user can clearly observe the target object. If the user is not satisfied with the first segmentation, the robot can adapt its pose to obtain different views of the object. The approach has been implemented and tested on the humanoid robot ENON. With only a small amount of gaze guidance, segmentation of unknown objects achieved a success rate of 85%. The experimental results confirm the applicability of the framework to a wide variety of objects, even when the target object is occluded by another object.
UR - http://www.scopus.com/inward/record.url?scp=84964466954&partnerID=8YFLogxK
U2 - 10.1109/ROBIO.2015.7419066
DO - 10.1109/ROBIO.2015.7419066
M3 - Conference contribution
AN - SCOPUS:84964466954
T3 - 2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015
SP - 1997
EP - 2002
BT - 2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015
Y2 - 6 December 2015 through 9 December 2015
ER -