TY - GEN
T1 - Vision Perception-based Adaptive Pushing Assisted Grasping Network for Dense Clutters
AU - Liu, Xinqi
AU - Chai, Runqi
AU - Wang, Shuo
AU - Chai, Senchun
AU - Xia, Yuanqing
N1 - Publisher Copyright:
© 2024 Technical Committee on Control Theory, Chinese Association of Automation.
PY - 2024
Y1 - 2024
N2 - During a robotic grasping task, grasping may fail when multiple objects lie in close proximity if grasping is the only available motion primitive. Non-prehensile manipulations, such as pushing, can be used to rearrange objects and facilitate grasping, and pushing actions that vary in speed, distance, and route may yield better performance. In this study, we propose a vision perception-based Adaptive Pushing Assisted Grasping Network (APAGN) that generates a sequence of grasping and adaptive pushing actions. APAGN perceives the scene and predicts the locations of objects after an adaptive push, which adjusts the force and direction of pushing according to the expected performance. For computational efficiency, the Action Selector of APAGN chooses the object with the highest expected outcome before making a prediction. The value of a pushing action is estimated by how much it benefits subsequent grasping, which removes the limitation of manually designed rewards. Simulations show that APAGN achieves higher action efficiency than baseline methods, especially in cluttered environments.
AB - During a robotic grasping task, grasping may fail when multiple objects lie in close proximity if grasping is the only available motion primitive. Non-prehensile manipulations, such as pushing, can be used to rearrange objects and facilitate grasping, and pushing actions that vary in speed, distance, and route may yield better performance. In this study, we propose a vision perception-based Adaptive Pushing Assisted Grasping Network (APAGN) that generates a sequence of grasping and adaptive pushing actions. APAGN perceives the scene and predicts the locations of objects after an adaptive push, which adjusts the force and direction of pushing according to the expected performance. For computational efficiency, the Action Selector of APAGN chooses the object with the highest expected outcome before making a prediction. The value of a pushing action is estimated by how much it benefits subsequent grasping, which removes the limitation of manually designed rewards. Simulations show that APAGN achieves higher action efficiency than baseline methods, especially in cluttered environments.
KW - Big Data in Robotics and Automation
KW - Reinforcement Learning
KW - Robotics Control
KW - Vision Perception
UR - http://www.scopus.com/inward/record.url?scp=85205478466&partnerID=8YFLogxK
U2 - 10.23919/CCC63176.2024.10662371
DO - 10.23919/CCC63176.2024.10662371
M3 - Conference contribution
AN - SCOPUS:85205478466
T3 - Chinese Control Conference, CCC
SP - 8411
EP - 8416
BT - Proceedings of the 43rd Chinese Control Conference, CCC 2024
A2 - Na, Jing
A2 - Sun, Jian
PB - IEEE Computer Society
T2 - 43rd Chinese Control Conference, CCC 2024
Y2 - 28 July 2024 through 31 July 2024
ER -