TY - JOUR
T1 - A CNN-based prototype method of unstructured surgical state perception and navigation for an endovascular surgery robot
AU - Zhao, Yan
AU - Guo, Shuxiang
AU - Wang, Yuxin
AU - Cui, Jinxin
AU - Ma, Youchun
AU - Zeng, Yuwen
AU - Liu, Xinke
AU - Jiang, Yuhua
AU - Li, Youxiang
AU - Shi, Liwei
AU - Xiao, Nan
N1 - Publisher Copyright:
© 2019, International Federation for Medical and Biological Engineering.
PY - 2019/9/1
Y1 - 2019/9/1
N2 - Performance of robot-assisted endovascular surgery (ES) remains highly dependent on an individual surgeon’s skills, due to the common adoption of master-slave robotic structures. Surgeons’ skill modeling and unstructured surgical state perception pose prohibitive challenges for an autonomous ES robot. In this paper, a novel convolutional neural network (CNN)-based framework is proposed to address these challenges for navigation of an ES robot based on surgeons’ skill learning. An operating action probability estimator is proposed by integrating a two-dimensional CNN, with which the features of a surgical state image are extracted and then directly mapped to the action probability. A one-dimensional CNN with multiple inputs is developed to recognize the guide wire operating force condition. An eye-hand collaborative servoing algorithm is proposed to combine the outputs of these two networks and to control the robot under a closed-loop architecture. A real-world ES robot is employed for data collection and task performance evaluation under laboratory conditions. Compared with the state of the art, the CNN-based method shows its capability of adapting to different situations and achieves a similar success rate and average operating time. Robotic operation follows a similar operating trajectory and maintains a similar level of operating force to manual operation. The CNN-based method can be easily extended to many other surgical robots.
AB - Performance of robot-assisted endovascular surgery (ES) remains highly dependent on an individual surgeon’s skills, due to the common adoption of master-slave robotic structures. Surgeons’ skill modeling and unstructured surgical state perception pose prohibitive challenges for an autonomous ES robot. In this paper, a novel convolutional neural network (CNN)-based framework is proposed to address these challenges for navigation of an ES robot based on surgeons’ skill learning. An operating action probability estimator is proposed by integrating a two-dimensional CNN, with which the features of a surgical state image are extracted and then directly mapped to the action probability. A one-dimensional CNN with multiple inputs is developed to recognize the guide wire operating force condition. An eye-hand collaborative servoing algorithm is proposed to combine the outputs of these two networks and to control the robot under a closed-loop architecture. A real-world ES robot is employed for data collection and task performance evaluation under laboratory conditions. Compared with the state of the art, the CNN-based method shows its capability of adapting to different situations and achieves a similar success rate and average operating time. Robotic operation follows a similar operating trajectory and maintains a similar level of operating force to manual operation. The CNN-based method can be easily extended to many other surgical robots.
KW - Autonomous surgical robot
KW - Deep convolutional neural network
KW - Surgeons’ operating skill learning
KW - Unstructured surgical state perception
UR - http://www.scopus.com/inward/record.url?scp=85068159191&partnerID=8YFLogxK
U2 - 10.1007/s11517-019-02002-0
DO - 10.1007/s11517-019-02002-0
M3 - Article
AN - SCOPUS:85068159191
SN - 0140-0118
VL - 57
SP - 1875
EP - 1887
JO - Medical and Biological Engineering and Computing
JF - Medical and Biological Engineering and Computing
IS - 9
ER -