Abstract
Six-dimensional (6-D) object pose estimation plays a critical role in robotic grasping, which is widely used in manufacturing. Current state-of-the-art pose estimation techniques primarily depend on keypoint matching. Typically, these methods establish correspondences between 2-D keypoints in an image and their counterparts in a 3-D object model, and then use the PnP-RANSAC algorithm to determine the 6-D pose of the object. However, this approach is not end-to-end trainable and may encounter difficulties in scenarios that require differentiable poses, while direct end-to-end regression often yields inferior results. To tackle these problems, we present GR6D, a keypoint- and graph-convolution-based neural network for differentiable pose estimation from RGB-D data. First, we propose a multiscale fusion method that uses convolution and graph convolution to exploit the information contained in RGB and depth images. Additionally, we propose a transformer-based pose refinement module to further adjust features from RGB images and point clouds. We evaluate GR6D on three datasets: 1) LINEMOD; 2) Occlusion LINEMOD; and 3) the YCB-Video dataset, where it outperforms most state-of-the-art methods. Finally, we apply GR6D to pose estimation and robotic grasping tasks in the real world, demonstrating superior performance.
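For context, the conventional baseline that the abstract contrasts against (2-D/3-D keypoint correspondences resolved to a 6-D pose with PnP-RANSAC) can be sketched with OpenCV. This is a minimal illustration, not the paper's GR6D pipeline; the intrinsics, keypoints, and pose below are synthetic placeholders.

```python
import numpy as np
import cv2

# 3-D keypoints on the object model (here: corners of a 10 cm cube), in metres.
pts_3d = np.array([[x, y, z] for x in (-0.05, 0.05)
                             for y in (-0.05, 0.05)
                             for z in (-0.05, 0.05)], dtype=np.float64)

# Assumed pinhole camera intrinsics (placeholder values).
K = np.array([[572.4, 0.0, 325.3],
              [0.0, 573.6, 242.0],
              [0.0, 0.0, 1.0]])

# Synthesise a ground-truth pose and project the model keypoints to obtain
# the 2-D detections a keypoint network would normally predict.
rvec_gt = np.array([[0.1], [0.2], [0.3]])   # axis-angle rotation
tvec_gt = np.array([[0.0], [0.0], [0.6]])   # translation
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, None)

# PnP-RANSAC recovers rotation and translation from the correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix; (R, tvec) is the 6-D pose
print(ok, rvec.ravel(), tvec.ravel())
```

RANSAC's hypothesize-and-verify inlier selection is not differentiable, which is the limitation the abstract points to: gradients cannot flow from the pose back to the keypoint predictor, motivating differentiable, end-to-end approaches such as GR6D.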
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-13 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
| DOI | |
| Publication status | Accepted/In press - 2024 |