Visuomotor Policy Learning for Task Automation of Surgical Robot

Junhui Huang, Qingxin Shi, Dongsheng Xie, Yiming Ma, Xiaoming Liu, Changsheng Li*, Xingguang Duan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing adoption of robotic surgery systems, the need for automated surgical tasks has become more pressing. Recent learning-based approaches offer solutions to surgical automation but typically rely on low-dimensional observations. To further imitate the actions of surgeons in an end-to-end paradigm, this paper introduces a novel vision-based approach to automating surgical tasks using generative imitation learning for robotic systems. We develop a hybrid model, called ACMT, that integrates state space models, transformers, and conditional variational autoencoders (CVAE) to enhance performance and generalization. Leveraging Mamba blocks and multi-head cross-attention mechanisms for sequential modeling, the proposed model achieves success rates of 75-100% with just 100 demonstrations on most tasks. This work significantly advances data-driven automation in surgical robotics, aiming to alleviate the burden on surgeons and improve surgical outcomes.
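The abstract describes the ACMT hybrid only at a high level. As a rough illustration of how a CVAE policy with state-space sequential mixing and multi-head cross-attention could be wired together, here is a minimal PyTorch sketch. The `SSMBlock` stand-in, the `ACMTPolicy` class name, and all layer sizes and chunk lengths are assumptions for illustration, not the paper's actual implementation; a faithful Mamba block would come from the `mamba_ssm` package.

```python
# Hypothetical sketch of an ACMT-style visuomotor policy (not the authors' code).
import torch
import torch.nn as nn

class SSMBlock(nn.Module):
    """Stand-in for a Mamba-style selective state space block: a gated
    depthwise causal convolution approximates the sequential mixing."""
    def __init__(self, d_model):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=4,
                              padding=3, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, time, d_model)
        h = self.conv(x.transpose(1, 2))[..., :x.size(1)].transpose(1, 2)
        return x + self.proj(h * torch.sigmoid(self.gate(x)))

class ACMTPolicy(nn.Module):
    """CVAE policy: encode a latent z from embedded demonstration action
    chunks at training time, then decode an action chunk by cross-attending
    learned queries to the SSM-processed observation tokens."""
    def __init__(self, d_model=256, z_dim=32, act_dim=7, chunk=20):
        super().__init__()
        self.ssm = SSMBlock(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8,
                                                batch_first=True)
        self.to_latent = nn.Linear(d_model, 2 * z_dim)  # posterior (mu, logvar)
        self.z_proj = nn.Linear(z_dim, d_model)
        self.queries = nn.Parameter(torch.randn(chunk, d_model))
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_tokens, act_tokens=None):
        ctx = self.ssm(obs_tokens)  # (B, T, D) observation context
        if act_tokens is not None:  # training: sample z from the posterior
            mu, logvar = self.to_latent(act_tokens.mean(1)).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        else:                       # inference: zero-mean prior
            mu = logvar = None
            z = obs_tokens.new_zeros(obs_tokens.size(0),
                                     self.z_proj.in_features)
        q = self.queries.unsqueeze(0) + self.z_proj(z).unsqueeze(1)
        out, _ = self.cross_attn(q, ctx, ctx)  # (B, chunk, D)
        return self.head(out), (mu, logvar)    # predicted action chunk

# Example: 2 trajectories of 50 image-feature tokens -> 20-step action chunks.
policy = ACMTPolicy()
acts, _ = policy(torch.randn(2, 50, 256))  # acts: (2, 20, 7)
```

At training time such a model would add the usual CVAE KL term on `(mu, logvar)` to the action-reconstruction loss; at inference the latent is fixed to the prior mean, matching common action-chunking CVAE setups.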

Original language: English
Pages (from-to): 1448-1457
Number of pages: 10
Journal: IEEE Transactions on Medical Robotics and Bionics
Volume: 6
Issue number: 4
DOI
Publication status: Published - 2024
