Visuomotor Policy Learning for Task Automation of Surgical Robot

Junhui Huang, Qingxin Shi, Dongsheng Xie, Yiming Ma, Xiaoming Liu, Changsheng Li*, Xingguang Duan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing adoption of robotic surgery systems, the need for automated surgical tasks has become more pressing. Recent learning-based approaches provide solutions to surgical automation but typically rely on low-dimensional observations. To imitate the actions of surgeons in an end-to-end paradigm, this paper introduces a novel vision-based approach to automating surgical tasks using generative imitation learning for robotic systems. We develop a hybrid model, called ACMT, that integrates state space models, transformers, and conditional variational autoencoders (CVAE) to enhance performance and generalization. Leveraging the Mamba block and multi-head cross-attention mechanisms for sequential modeling, the proposed model achieves a 75-100% success rate with just 100 demonstrations for most tasks. This work significantly advances data-driven automation in surgical robotics, aiming to alleviate the burden on surgeons and improve surgical outcomes.
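The abstract names three ingredients (a Mamba-style state-space block, multi-head cross-attention over visual features, and a CVAE over action sequences) but gives no implementation details. Below is a minimal, self-contained PyTorch sketch of how such a policy might be wired together. Everything here is an illustrative assumption, not the authors' architecture: the module names, dimensions, chunk length, and in particular GatedSeqBlock, a simplified gated causal-convolution block standing in for a true Mamba/selective-scan block.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSeqBlock(nn.Module):
    """Simplified gated causal-conv block; a stand-in for a Mamba/state-space block."""
    def __init__(self, d: int):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.in_proj = nn.Linear(d, 2 * d)
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=2, groups=d)
        self.out_proj = nn.Linear(d, d)

    def forward(self, x):                               # x: (B, T, d)
        h, g = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        # Depthwise conv; slicing to the first T steps makes it causal.
        h = self.conv(h.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + self.out_proj(F.silu(h) * torch.sigmoid(g))

class CVAEPolicy(nn.Module):
    """Hypothetical visuomotor policy: CVAE over action chunks, decoded by
    cross-attending learned action queries to visual tokens."""
    def __init__(self, d=256, act_dim=7, chunk=20, z_dim=32, n_heads=8):
        super().__init__()
        self.z_dim = z_dim
        # CVAE encoder: summarizes a ground-truth action chunk into latent z.
        self.enc_in = nn.Linear(act_dim, d)
        self.enc = GatedSeqBlock(d)
        self.to_mu_logvar = nn.Linear(d, 2 * z_dim)
        # Decoder: action queries cross-attend to visual features, then a
        # sequential block refines the chunk before the action head.
        self.queries = nn.Parameter(torch.randn(chunk, d))
        self.z_proj = nn.Linear(z_dim, d)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.dec = GatedSeqBlock(d)
        self.head = nn.Linear(d, act_dim)

    def forward(self, img_feats, actions=None):
        # img_feats: (B, N, d) visual tokens from any backbone
        # actions:   (B, chunk, act_dim) ground truth, training only
        B = img_feats.size(0)
        if actions is not None:                          # training: infer z
            h = self.enc(self.enc_in(actions)).mean(dim=1)
            mu, logvar = self.to_mu_logvar(h).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        else:                                            # inference: prior mean
            mu = logvar = None
            z = torch.zeros(B, self.z_dim, device=img_feats.device)
        q = self.queries.unsqueeze(0).expand(B, -1, -1) + self.z_proj(z).unsqueeze(1)
        h, _ = self.cross_attn(q, img_feats, img_feats)  # queries attend to vision
        return self.head(self.dec(h)), mu, logvar        # (B, chunk, act_dim)

A typical CVAE training step under these assumptions combines a reconstruction loss over the predicted action chunk with a KL term against a standard-normal prior:

policy = CVAEPolicy()
img_feats = torch.randn(4, 49, 256)      # e.g. a 7x7 CNN feature map as 49 tokens
actions = torch.randn(4, 20, 7)          # demonstrated action chunk
pred, mu, logvar = policy(img_feats, actions)
recon = F.l1_loss(pred, actions)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon + 1e-2 * kl                 # KL weight is an illustrative choice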

Original language: English
Journal: IEEE Transactions on Medical Robotics and Bionics
Publication status: Accepted/In press - 2024

Keywords

  • Imitation learning
  • Surgical robots
  • Surgical task automation
