A grasping CNN with image segmentation for mobile manipulating robot

Yingying Yu, Zhiqiang Cao, Shuang Liang, Zhicheng Liu, Junzhi Yu, Xuechao Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

This paper presents a grasping convolutional neural network with image segmentation for a mobile manipulating robot. The proposed method cascades a feature pyramid network (FPN) and a grasping network (DrGNet). The FPN, combined with point cloud clustering, is used to obtain the mask of the target object. The grayscale map and the depth map corresponding to the target object are then combined and fed to the DrGNet network, which provides multi-scale images. On this basis, depthwise separable convolution is used for encoding. The encoder outputs are refined with a light-weight RefineNet as well as sSE blocks, which achieves better grasp detection. The proposed method is verified by experiments on a mobile manipulating robot.
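The abstract's mention of depthwise separable convolution for encoding reflects a standard efficiency trade-off: a k×k convolution is factored into a per-channel (depthwise) k×k convolution followed by a 1×1 (pointwise) convolution. A minimal sketch of the parameter-count comparison is below; the channel sizes and kernel size are illustrative assumptions, not values taken from the paper.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Channel sizes and kernel size are illustrative, not taken from DrGNet.

def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Weights in a depthwise k x k conv plus a pointwise 1 x 1 conv."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 64, 128, 3
    std = standard_conv_params(c_in, c_out, k)        # 73728
    sep = depthwise_separable_params(c_in, c_out, k)  # 8768
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For these example sizes the factorization cuts the weight count by roughly 8×, which is why it is a common choice for networks meant to run on mobile robot hardware.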

Original language: English
Title of host publication: IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1688-1692
Number of pages: 5
ISBN (Electronic): 9781728163215
DOIs
Publication status: Published - Dec 2019
Event: 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019 - Dali, China
Duration: 6 Dec 2019 – 8 Dec 2019

Publication series

Name: IEEE International Conference on Robotics and Biomimetics, ROBIO 2019

Conference

Conference: 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
Country/Territory: China
City: Dali
Period: 6/12/19 – 8/12/19

Keywords

  • Grasping CNN
  • Image segmentation
  • Mobile manipulating robot
  • Robotic grasping

