TY - GEN
T1 - Language Guided Robotic Grasping with Fine-Grained Instructions
AU - Sun, Qiang
AU - Lin, Haitao
AU - Fu, Ying
AU - Fu, Yanwei
AU - Xue, Xiangyang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Given a single RGB image and attribute-rich language instructions, this paper investigates the novel problem of using Fine-grained instructions for Language guided robotic Grasping (FLarG). This problem is challenging because it requires learning fine-grained language descriptions to ground target objects. Recent advances have grounded objects visually using only a few coarse attributes [1]. However, these methods perform poorly, as they cannot align the multi-modal features well and do not make full use of recent powerful large pre-trained vision and language models, e.g., CLIP. To this end, this paper proposes a FLarG pipeline comprising stages of CLIP-guided object localization and 6-DoF category-level object pose estimation for grasping. Specifically, we first take the CLIP-based segmentation model CRIS as the backbone and propose an end-to-end DyCRIS model that uses a novel dynamic mask strategy to fuse the multi-level language and vision features. Then, the well-trained instance segmentation backbone Mask R-CNN is adopted to further refine the mask predicted by our DyCRIS. Finally, the target object pose is inferred for robotic grasping using a recent 6-DoF object pose estimation method. To validate our CLIP-enhanced pipeline, we also construct a validation dataset for our FLarG task, named RefNOCS. Extensive results on RefNOCS show the utility and effectiveness of our proposed method. The project homepage is available at https://sunqiang85.github.io/FLarG/.
AB - Given a single RGB image and attribute-rich language instructions, this paper investigates the novel problem of using Fine-grained instructions for Language guided robotic Grasping (FLarG). This problem is challenging because it requires learning fine-grained language descriptions to ground target objects. Recent advances have grounded objects visually using only a few coarse attributes [1]. However, these methods perform poorly, as they cannot align the multi-modal features well and do not make full use of recent powerful large pre-trained vision and language models, e.g., CLIP. To this end, this paper proposes a FLarG pipeline comprising stages of CLIP-guided object localization and 6-DoF category-level object pose estimation for grasping. Specifically, we first take the CLIP-based segmentation model CRIS as the backbone and propose an end-to-end DyCRIS model that uses a novel dynamic mask strategy to fuse the multi-level language and vision features. Then, the well-trained instance segmentation backbone Mask R-CNN is adopted to further refine the mask predicted by our DyCRIS. Finally, the target object pose is inferred for robotic grasping using a recent 6-DoF object pose estimation method. To validate our CLIP-enhanced pipeline, we also construct a validation dataset for our FLarG task, named RefNOCS. Extensive results on RefNOCS show the utility and effectiveness of our proposed method. The project homepage is available at https://sunqiang85.github.io/FLarG/.
UR - http://www.scopus.com/inward/record.url?scp=85182523132&partnerID=8YFLogxK
U2 - 10.1109/IROS55552.2023.10342331
DO - 10.1109/IROS55552.2023.10342331
M3 - Conference contribution
AN - SCOPUS:85182523132
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 1319
EP - 1326
BT - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
Y2 - 1 October 2023 through 5 October 2023
ER -