TY - JOUR
T1 - SGNet: Robotic Suction Grasp Detection With Multiscale Attention
T2 - IEEE/ASME Transactions on Mechatronics
AU - Zhai, Di-Hua
AU - Yu, Sheng
AU - Guan, Yuyin
AU - Xia, Yuanqing
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Suction gripping plays an important role in robotic grasping and is widely used in practical scenarios such as object sorting, handling, and intelligent assembly in industrial settings. However, most existing suction grasping methods struggle to achieve high efficiency and high accuracy simultaneously. To tackle this issue, this article introduces a novel suction grasp detection network called SGNet. Using a new segmentation network, SGNet predicts the grasping position, grasping score, and object center; these predictions are then mapped into 3-D space using point cloud data from the actual scene to determine the final grasp pose. The key component of SGNet is grasping position prediction, which directly influences the success rate and time efficiency of the grasping task. To handle variations in object scale and to focus on viable grasping positions, we introduce a multiscale attention module that fully fuses multiscale features and helps the network predict the grasping position accurately. Moreover, we propose a new evaluation method for grasping positions that further improves accuracy and reliability. We evaluate SGNet on three datasets: SuctionNet-1Billion, the suction grasping dataset, and Dex-Net 3.0. The results demonstrate the effectiveness and superiority of SGNet. Finally, we conduct plane and bin-picking experiments on a real Baxter robot, attaining a high success rate and validating the practical applicability of SGNet in real-world scenarios.
KW - Multiscale information
KW - object detection
KW - robot
KW - suction grasp
UR - http://www.scopus.com/inward/record.url?scp=85219304270&partnerID=8YFLogxK
U2 - 10.1109/TMECH.2025.3538093
DO - 10.1109/TMECH.2025.3538093
M3 - Article
AN - SCOPUS:85219304270
SN - 1083-4435
JO - IEEE/ASME Transactions on Mechatronics
JF - IEEE/ASME Transactions on Mechatronics
ER -