TY - JOUR
T1 - Dual-stage semantic segmentation of endoscopic surgical instruments
AU - Chen, Wenxin
AU - Wang, Kaifeng
AU - Song, Xinya
AU - Xie, Dongsheng
AU - Li, Xue
AU - Islam, Mobarakol
AU - Li, Changsheng
AU - Duan, Xingguang
N1 - Publisher Copyright:
© 2024 American Association of Physicists in Medicine.
PY - 2024
Y1 - 2024
AB - Background: Endoscopic instrument segmentation is essential for ensuring the safety of robotic-assisted spinal endoscopic surgeries. However, due to the narrow operative region, intricate surrounding tissues, and limited visibility, segmenting instruments within the endoscopic view remains challenging. Purpose: This work aims to devise a method for segmenting surgical instruments in endoscopic video. An endoscopic image classification model is designed so that features from preceding and subsequent frames of the video can be exploited, enabling continuous and precise segmentation of instruments in endoscopic videos. Methods: Deep learning techniques form the algorithmic core of the convolutional neural network proposed in this study. The method comprises two stages: image classification and instrument segmentation. MobileViT is employed for image classification, extracting the key features of different instruments and generating classification results. DeepLabv3+ is used for instrument segmentation; by training on each instrument separately, instrument-specific model parameters are obtained. Finally, a flag caching mechanism and a blur detection module are designed to effectively exploit image features across consecutive frames. By loading the instrument-specific parameters into the segmentation model, improved segmentation of surgical instruments is achieved in endoscopic videos. Results: The classification and segmentation models are evaluated on an endoscopic image dataset. The instrument segmentation dataset comprises 7456 training images, 829 validation images, and 921 test images; the image classification dataset comprises 2400 training images and 600 validation images. The image classification model achieves an accuracy of 70% on the validation set. For the segmentation model, experiments are conducted on two common surgical instruments, and the mean Intersection over Union (mIoU) exceeds 98%. Furthermore, the proposed video segmentation method is tested on videos collected during surgeries, validating the effectiveness of the flag caching mechanism and blur detection module. Conclusions: Experimental results on the dataset demonstrate that the dual-stage video processing method excels at instrument segmentation under endoscopic conditions. This advancement is significant for enhancing the intelligence of robotic-assisted spinal endoscopic surgery.
KW - deep learning
KW - instrument segmentation
KW - spine endoscopic surgery
UR - http://www.scopus.com/inward/record.url?scp=85203529889&partnerID=8YFLogxK
DO - 10.1002/mp.17397
M3 - Article
AN - SCOPUS:85203529889
SN - 0094-2405
JO - Medical Physics
JF - Medical Physics
ER -