Visual-Based Deep Reinforcement Learning for Robot Grasping with Pushing

Shufan Li*, Sheng Yu, Di Hua Zhai, Yuanqing Xia

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Robot grasping is a hot topic in the field of intelligent robotics, and the synergy between grasping and pushing greatly benefits robot manipulation tasks. Prior work proposed a model-free robot grasping method based on deep reinforcement learning with a pushing-and-grasping network, which selects pushing or grasping actions from visual observations of the scene state and obtains rewards accordingly; the network was trained through trial and error. Building on this method, we improve the model by adding a feature fusion module and an attention module to the deep Q-network, and we train and test it in a simulated environment. Experiments show that our model significantly improves grasping success rate and efficiency. Moreover, in complex unknown environments, the grasping performance of our model is also better than that of the original model.
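The abstract does not give architectural details, so the following PyTorch sketch is only a rough illustration of how a feature fusion module and a channel attention module could be inserted into a fully convolutional Q-network that maps color and depth heightmaps to pixel-wise push and grasp Q values. All module names, layer sizes, and shapes here are assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a fully convolutional Q-network
# that fuses color and depth heightmap features and applies channel attention
# before predicting dense Q values for pushing and grasping primitives.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # reweight feature channels

class PushGraspQNet(nn.Module):
    """Maps color/depth heightmaps to per-pixel Q maps for push and grasp."""
    def __init__(self, feat_channels=64):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, feat_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.color_encoder = encoder(3)   # RGB heightmap branch
        self.depth_encoder = encoder(1)   # depth heightmap branch
        # Feature fusion: concatenate branch features, then mix with a 1x1 conv.
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)
        self.attn = ChannelAttention(feat_channels)
        # Two heads: dense Q values for the pushing and grasping primitives.
        self.push_head = nn.Conv2d(feat_channels, 1, kernel_size=1)
        self.grasp_head = nn.Conv2d(feat_channels, 1, kernel_size=1)

    def forward(self, color, depth):
        f = torch.cat([self.color_encoder(color), self.depth_encoder(depth)], dim=1)
        f = self.attn(self.fuse(f))
        return self.push_head(f), self.grasp_head(f)

# Example: a 224x224 heightmap pair yields 56x56 Q maps for each primitive.
q_push, q_grasp = PushGraspQNet()(torch.zeros(1, 3, 224, 224),
                                  torch.zeros(1, 1, 224, 224))
```

In a pipeline of this kind, the executed action would typically be the argmax over both Q maps, with the network updated from the rewards received, consistent with the trial-and-error training described in the abstract.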

Original language: English
Title of host publication: Proceedings - 2023 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 768-773
Number of pages: 6
ISBN (electronic): 9798350303636
DOI
Publication status: Published - 2023
Event: 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023 - Hefei, China
Duration: 27 Aug 2023 → 29 Aug 2023

Publication series

Name: Proceedings - 2023 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023

Conference

Conference: 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023
Country/Territory: China
City: Hefei
Period: 27/08/23 → 29/08/23
