Abstract
With the increasing adoption of robotic surgery systems, the need for automated surgical tasks has become more pressing. Recent learning-based approaches offer solutions for surgical automation but typically rely on low-dimensional observations. To further imitate surgeons' actions in an end-to-end paradigm, this paper introduces a novel vision-based approach to automating surgical tasks using generative imitation learning for robotic systems. We develop a hybrid model, called ACMT, that integrates state space models, transformers, and conditional variational autoencoders (CVAE) to enhance performance and generalization. Leveraging the Mamba block and multi-head cross-attention mechanisms for sequential modeling, the proposed model achieves a 75-100% success rate with just 100 demonstrations for most tasks. This work significantly advances data-driven automation in surgical robotics, aiming to alleviate the burden on surgeons and improve surgical outcomes.
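The abstract names three ingredients (a CVAE, a Mamba state-space block, and multi-head cross-attention) without architectural detail. As a rough orientation only, the sketch below wires those ingredients together in PyTorch in the style of ACT-like action-chunking policies; every module name, dimension, the GRU action encoder, and the toy gated recurrence standing in for a real Mamba block are illustrative assumptions, not the paper's implementation.

```python
# A minimal, illustrative sketch of an "ACMT-style" policy: a CVAE whose
# decoder combines a simplified state-space (Mamba-like) block with
# multi-head cross-attention over visual features to predict action chunks.
# All names, sizes, and the toy recurrence are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySSMBlock(nn.Module):
    """Gated linear recurrence over time, standing in for a real Mamba block.

    Real Mamba uses input-dependent (selective) SSM parameters and a fused
    parallel scan; this toy keeps only the core sequential-recurrence idea.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)
        self.decay = nn.Parameter(torch.zeros(dim))   # learned per-channel decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)                 # decay factor in (0, 1)
        h = torch.zeros_like(u[:, 0])
        states = []
        for t in range(u.shape[1]):                   # sequential scan over time
            h = a * h + (1.0 - a) * u[:, t]
            states.append(h)
        y = torch.stack(states, dim=1) * F.silu(gate)
        return x + self.out_proj(y)                   # residual connection


class ChunkDecoder(nn.Module):
    """Decodes a latent z and visual tokens into a chunk of future actions."""

    def __init__(self, dim: int, n_heads: int, chunk: int, act_dim: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(chunk, dim) * 0.02)
        self.ssm = ToySSMBlock(dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, act_dim)

    def forward(self, z: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # z: (B, D) latent; vis: (B, S, D) image feature tokens.
        q = self.queries.unsqueeze(0) + z.unsqueeze(1)  # condition queries on z
        q = self.ssm(q)                                 # temporal mixing (SSM)
        out, _ = self.cross_attn(q, vis, vis)           # attend to visual tokens
        return self.head(out)                           # (B, chunk, act_dim)


class CVAEPolicy(nn.Module):
    """CVAE wrapper: encode demonstrated actions to z during training,
    use the prior mean (z = 0) at inference, as in ACT-style policies."""

    def __init__(self, dim: int = 128, n_heads: int = 4,
                 chunk: int = 16, act_dim: int = 7):
        super().__init__()
        self.dim = dim
        self.action_enc = nn.GRU(act_dim, dim, batch_first=True)
        self.to_stats = nn.Linear(dim, 2 * dim)
        self.decoder = ChunkDecoder(dim, n_heads, chunk, act_dim)

    def forward(self, vis: torch.Tensor, actions: torch.Tensor = None):
        if actions is not None:                        # training: posterior z
            _, h = self.action_enc(actions)
            mu, logvar = self.to_stats(h[-1]).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        else:                                          # inference: prior mean
            mu = logvar = None
            z = vis.new_zeros(vis.shape[0], self.dim)
        return self.decoder(z, vis), mu, logvar


if __name__ == "__main__":
    policy = CVAEPolicy()
    vis = torch.randn(2, 49, 128)      # e.g. a 7x7 CNN feature map as tokens
    actions = torch.randn(2, 16, 7)    # demonstrated action chunk
    pred, mu, logvar = policy(vis, actions)
    recon = F.mse_loss(pred, actions)  # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    print(pred.shape, float(recon + 1e-3 * kl))
```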
Original language | English
---|---
Pages (from-to) | 1448-1457
Number of pages | 10
Journal | IEEE Transactions on Medical Robotics and Bionics
Volume | 6
Issue | 4
DOI | 
Publication status | Published - 2024