A RGB-D based 6D Object Pose Estimation and Its Application in Robotic Grasping

Sheng Yu, Di Hua Zhai*, Haoran Wu, Jun Liao, Yuanqing Xia

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Pose estimation of objects is critical to robotic grasping. Local optimization approaches, which minimize the distance between point pairs to estimate the 6D pose, have been widely used; however, they are time-consuming and of limited accuracy. To overcome this problem, a novel and time-efficient 6D object pose estimation neural network, PoseNet, is proposed in this paper. PoseNet takes an RGB-D image as input and extracts features with a novel fusion network equipped with a channel attention mechanism. A random-sample-consensus-based voting method and rotation anchors are developed to predict the translation and the rotation of the object, respectively. Performance evaluation on the YCB-Video dataset shows that the method achieves real-time inference with high accuracy. The proposed method is also demonstrated on a practical robotic grasping system; the experiment video is available at https://www.bilibili.com/video/BV1qf4y1s7in.
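The paper's code is not included in this record, but the abstract's description of the fusion step is concrete enough to sketch. Below is a minimal, hypothetical illustration of channel-attention-based RGB-D feature fusion in the squeeze-and-excitation style, written in PyTorch; the class name, layer sizes, and reduction ratio are all assumptions for illustration, not the authors' actual PoseNet implementation.

    import torch
    import torch.nn as nn

    class ChannelAttentionFusion(nn.Module):
        """Fuse RGB and depth feature maps with channel attention.
        Hypothetical sketch: squeeze-and-excitation-style gating over the
        concatenated channels; not the authors' published architecture."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # "squeeze": global average pool
            self.fc = nn.Sequential(             # "excitation": per-channel gate
                nn.Linear(2 * channels, 2 * channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(2 * channels // reduction, 2 * channels),
                nn.Sigmoid(),
            )

        def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
            x = torch.cat([rgb_feat, depth_feat], dim=1)  # (B, 2C, H, W)
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c))          # channel weights in (0, 1)
            return x * w.view(b, c, 1, 1)                 # reweighted fused features

    # Usage with dummy feature maps
    fusion = ChannelAttentionFusion(channels=64)
    rgb = torch.randn(2, 64, 32, 32)
    depth = torch.randn(2, 64, 32, 32)
    out = fusion(rgb, depth)  # shape: (2, 128, 32, 32)

The intuition behind such a block is that the learned gate lets the network decide, per input, which RGB and which depth channels to emphasize before pose regression, which is the usual motivation for channel attention in RGB-D fusion.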

Original language: English
Title of host publication: Proceeding - 2021 China Automation Congress, CAC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5953-5958
Number of pages: 6
ISBN (Electronic): 9781665426473
Publication status: Published - 2021
Event: 2021 China Automation Congress, CAC 2021 - Beijing, China
Duration: 22 Oct 2021 - 24 Oct 2021

Publication series

Name: Proceeding - 2021 China Automation Congress, CAC 2021

Conference

Conference: 2021 China Automation Congress, CAC 2021
Country/Territory: China
City: Beijing
Period: 22/10/21 - 24/10/21

Keywords

  • Channel Attention
  • Instance Segmentation
  • Pose Estimation
  • Robotic Grasping
  • Rotation Anchors
