Structured Local Feature-Conditioned 6-DOF Variational Grasp Detection Network in Cluttered Scenes

Hongyang Liu, Hui Li, Changhua Jiang, Shuqi Xue, Yan Zhao, Xiao Huang*, Zhihong Jiang*

*Corresponding authors of this work

Research output: Journal article › peer-reviewed

Abstract

One of the most crucial abilities for robots is to grasp objects accurately in cluttered scenes. This article proposes a structured local feature-conditioned 6-DOF variational grasp detection network (LF-GraspNet) that can generate accurate grasp configurations in cluttered scenes end to end. First, we propose a network that uses a 3-D convolutional neural network with a conditional variational autoencoder (CVAE) as its backbone. The exploratory capability of the CVAE enhances the network's generalizability in grasp detection. Second, we jointly encode the truncated signed distance function (TSDF) of the scene and successful grasp configurations into a global feature that serves as the prior of the CVAE's latent space. The structured local feature of the TSDF volume is used as the condition of the CVAE, allowing the network to effectively fuse features of different modalities and scales. Simulation and real-world grasp experiments demonstrate that LF-GraspNet, trained on a grasp dataset containing a limited number of primitive objects, achieves higher success and declutter rates on unseen objects in cluttered scenes than baseline methods. In particular, in real-world experiments, LF-GraspNet grasps objects stably in cluttered scenes with either single-view or multiview depth images as input, demonstrating excellent grasp performance and generalization from simple primitive objects to complex, unseen objects.
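The abstract uses the truncated signed distance function (TSDF) of the scene as the input representation. As a rough illustration of what a TSDF volume is (not the paper's pipeline, which builds it from depth images), the following numpy sketch computes a TSDF for an analytic sphere on a voxel grid; the grid size, radius, and truncation distance are illustrative assumptions:

```python
import numpy as np

def sphere_tsdf(grid_size=32, radius=0.3, trunc=0.1):
    """Build a TSDF volume for a sphere centered in a unit cube.

    The signed distance of a point p to a sphere of radius r centered
    at c is ||p - c|| - r (negative inside the surface); the TSDF
    clips it to [-trunc, trunc] and normalizes to [-1, 1].
    """
    # Voxel-center coordinates in [0, 1]^3.
    axis = (np.arange(grid_size) + 0.5) / grid_size
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    center = 0.5
    sdf = np.sqrt((x - center) ** 2 + (y - center) ** 2
                  + (z - center) ** 2) - radius
    return np.clip(sdf / trunc, -1.0, 1.0)

tsdf = sphere_tsdf()
print(tsdf.shape)  # (32, 32, 32)
# Deep inside the sphere the TSDF saturates at -1, far outside at +1,
# and it crosses zero at the surface -- the zero-level set a 3-D CNN
# can exploit as local geometric structure.
```

In pipelines like the one described, such a volume would be fed to the 3-D convolutional backbone, with local crops of it serving as the CVAE's condition.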

Original language: English
Journal: IEEE/ASME Transactions on Mechatronics
DOI
Publication status: Accepted/In press - 2024
