TY - JOUR
T1 - A Two-Stream CNN With Simultaneous Detection and Segmentation for Robotic Grasping
AU - Yu, Yingying
AU - Cao, Zhiqiang
AU - Liu, Zhicheng
AU - Geng, Wenjie
AU - Yu, Junzhi
AU - Zhang, Weimin
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2022/2/1
Y1 - 2022/2/1
N2 - Manipulating robots have received much attention for the improved services they can offer, yet object grasping remains challenging, especially under background interference. In this article, a novel two-stream grasping convolutional neural network (CNN) with simultaneous detection and segmentation is proposed. The method cascades an improved simultaneous detection and segmentation network, BlitzNet, with a two-stream grasping CNN, TsGNet. The improved BlitzNet introduces a channel-based attention mechanism and improves detection and segmentation accuracy by combining the learning of multitask loss weightings with background suppression. Using the resulting bounding box and segmentation mask, the target object is separated from the background, and the corresponding depth map and grayscale map are fed to TsGNet. By adopting depthwise separable convolutions and a designed global deconvolution network, TsGNet predicts the best grasp with only a small number of network parameters. This best grasp in the pixel coordinate system is converted to a desired 6-D pose that drives the manipulator to execute the grasp. By combining a grasping CNN with simultaneous detection and segmentation, the method achieves the best grasp with good adaptability to the background. On the Cornell grasping dataset, the image-wise and object-wise accuracies of the proposed TsGNet are 93.13% and 92.99%, respectively. The effectiveness of the proposed method is verified by experiments.
AB - Manipulating robots have received much attention for the improved services they can offer, yet object grasping remains challenging, especially under background interference. In this article, a novel two-stream grasping convolutional neural network (CNN) with simultaneous detection and segmentation is proposed. The method cascades an improved simultaneous detection and segmentation network, BlitzNet, with a two-stream grasping CNN, TsGNet. The improved BlitzNet introduces a channel-based attention mechanism and improves detection and segmentation accuracy by combining the learning of multitask loss weightings with background suppression. Using the resulting bounding box and segmentation mask, the target object is separated from the background, and the corresponding depth map and grayscale map are fed to TsGNet. By adopting depthwise separable convolutions and a designed global deconvolution network, TsGNet predicts the best grasp with only a small number of network parameters. This best grasp in the pixel coordinate system is converted to a desired 6-D pose that drives the manipulator to execute the grasp. By combining a grasping CNN with simultaneous detection and segmentation, the method achieves the best grasp with good adaptability to the background. On the Cornell grasping dataset, the image-wise and object-wise accuracies of the proposed TsGNet are 93.13% and 92.99%, respectively. The effectiveness of the proposed method is verified by experiments.
KW - Global deconvolution network (GDN)
KW - robotic grasping
KW - simultaneous detection and segmentation
KW - two-stream grasping convolutional neural network (CNN)
UR - http://www.scopus.com/inward/record.url?scp=85123675245&partnerID=8YFLogxK
U2 - 10.1109/TSMC.2020.3018757
DO - 10.1109/TSMC.2020.3018757
M3 - Article
AN - SCOPUS:85123675245
SN - 2168-2216
VL - 52
SP - 1167
EP - 1181
JO - IEEE Transactions on Systems, Man, and Cybernetics: Systems
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
IS - 2
ER -