TY - JOUR
T1 - EGNet
T2 - Efficient Robotic Grasp Detection Network
AU - Yu, Sheng
AU - Zhai, Di Hua
AU - Xia, Yuanqing
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - In this article, a novel grasp detection network, the efficient grasp detection network (EGNet), is proposed to address the challenge of grasping in stacked scenes; it completes the tasks of object detection, grasp detection, and manipulation relationship detection. For object detection, EGNet builds on EfficientDet, with some hyperparameters modified to help the robot complete object detection and classification. For grasp detection, a novel grasp detection module is proposed, which takes the feature map from the bidirectional feature pyramid network (BiFPN) as input and outputs the grasp position and its quality score. For manipulation relationship analysis, the network takes the feature map from the BiFPN, the object detection results, and the grasp detection results, and outputs the best grasp position and the appropriate manipulation relationship. EGNet is trained and tested on the visual manipulation relationship dataset and the Cornell dataset, achieving detection accuracies of 87.1% and 98.9%, respectively. Finally, EGNet is also tested in a practical setting through grasp experiments on a Baxter robot. The grasp experiments are performed in cluttered and stacked scenes, achieving success rates of 93.6% and 69.6%, respectively.
AB - In this article, a novel grasp detection network, the efficient grasp detection network (EGNet), is proposed to address the challenge of grasping in stacked scenes; it completes the tasks of object detection, grasp detection, and manipulation relationship detection. For object detection, EGNet builds on EfficientDet, with some hyperparameters modified to help the robot complete object detection and classification. For grasp detection, a novel grasp detection module is proposed, which takes the feature map from the bidirectional feature pyramid network (BiFPN) as input and outputs the grasp position and its quality score. For manipulation relationship analysis, the network takes the feature map from the BiFPN, the object detection results, and the grasp detection results, and outputs the best grasp position and the appropriate manipulation relationship. EGNet is trained and tested on the visual manipulation relationship dataset and the Cornell dataset, achieving detection accuracies of 87.1% and 98.9%, respectively. Finally, EGNet is also tested in a practical setting through grasp experiments on a Baxter robot. The grasp experiments are performed in cluttered and stacked scenes, achieving success rates of 93.6% and 69.6%, respectively.
KW - Grasping detection
KW - manipulation relationship detection
KW - object detection
KW - robot
UR - http://www.scopus.com/inward/record.url?scp=85130447856&partnerID=8YFLogxK
U2 - 10.1109/TIE.2022.3174274
DO - 10.1109/TIE.2022.3174274
M3 - Article
AN - SCOPUS:85130447856
SN - 0278-0046
VL - 70
SP - 4058
EP - 4067
JO - IEEE Transactions on Industrial Electronics
JF - IEEE Transactions on Industrial Electronics
IS - 4
ER -