Visuomotor Policy Learning for Task Automation of Surgical Robot

Junhui Huang, Qingxin Shi, Dongsheng Xie, Yiming Ma, Xiaoming Liu, Changsheng Li*, Xingguang Duan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing adoption of robotic surgery systems, the need for automated surgical tasks has become more pressing. Recent learning-based approaches offer solutions for surgical automation but typically rely on low-dimensional observations. To further imitate the actions of surgeons in an end-to-end paradigm, this paper introduces a novel vision-based approach to automating surgical tasks using generative imitation learning for robotic systems. We develop a hybrid model, named ACMT, that integrates state space models, transformers, and conditional variational autoencoders (CVAE) to enhance performance and generalization. Leveraging the Mamba block and multi-head cross-attention mechanisms for sequential modeling, the proposed model achieves a 75-100% success rate on most tasks with just 100 demonstrations. This work significantly advances data-driven automation in surgical robotics, aiming to alleviate the burden on surgeons and improve surgical outcomes.
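The abstract names three generic building blocks (a state-space sequence model, cross-attention, and a CVAE) without giving implementation details. The sketch below is purely illustrative: it shows a minimal linear state-space scan, single-head scaled dot-product cross-attention, and the CVAE reparameterization trick in NumPy. The function names and shapes are assumptions for illustration, not the authors' ACMT implementation.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space scan: h_t = A h_{t-1} + B x_t, y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                      # iterate over time steps
        h = A @ h + B @ x_t            # state update
        ys.append(C @ h)               # readout
    return np.stack(ys)

def cross_attention(q, kv):
    """Single-head scaled dot-product cross-attention (queries attend to kv)."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv

def reparameterize(mu, logvar, rng):
    """CVAE reparameterization: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)
```

In a generative imitation-learning policy of this kind, the SSM scan would summarize the observation history, cross-attention would fuse that summary with visual features, and the CVAE latent would capture multimodality in the demonstrated actions; the full model trains end-to-end from demonstrations.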

Original language: English
Pages (from-to): 1448-1457
Number of pages: 10
Journal: IEEE Transactions on Medical Robotics and Bionics
Volume: 6
Issue number: 4
DOI: https://doi.org/10.1109/TMRB.2024.3464090
Publication status: Published - 2024

Keywords

  • Surgical robots
  • Imitation learning
  • Surgical task automation

Cite this

Huang, J., Shi, Q., Xie, D., Ma, Y., Liu, X., Li, C., & Duan, X. (2024). Visuomotor Policy Learning for Task Automation of Surgical Robot. IEEE Transactions on Medical Robotics and Bionics, 6(4), 1448-1457. https://doi.org/10.1109/TMRB.2024.3464090