TY - JOUR
T1 - Augmented reality navigation with real-time tracking for facial repair surgery
AU - Shao, Long
AU - Fu, Tianyu
AU - Zheng, Zhao
AU - Zhao, Zehua
AU - Ding, Lele
AU - Fan, Jingfan
AU - Song, Hong
AU - Zhang, Tao
AU - Yang, Jian
N1 - Publisher Copyright:
© 2022, CARS.
PY - 2022/6
Y1 - 2022/6
N2 - Purpose: Facial repair surgeries (FRS) require high accuracy to navigate critical anatomy safely and quickly. The purpose of this paper is to develop a method that directly tracks the patient's position using video data acquired from a single camera, achieving noninvasive, real-time, and highly accurate positioning in FRS. Methods: Our method first performs camera calibration and registers the surface segmented from computed tomography to the patient. Then, a two-step constraint algorithm, comprising a feature local constraint and a distance standard deviation constraint, is used to quickly find optimal feature matching pairs. Finally, the movements of the camera and the patient, decomposed from the image motion matrix, are used to track the camera and the patient, respectively. Results: The proposed method achieved RMS fusion errors of 1.44 ± 0.35 mm, 1.50 ± 0.15 mm, and 1.63 ± 0.03 mm in skull phantom, cadaver mandible, and human experiments, respectively. These errors were lower than those of an optical tracking system-based method. Additionally, the proposed method processed video streams at up to 24 frames per second, meeting the real-time requirements of FRS. Conclusions: The proposed method does not rely on tracking markers attached to the patient; it can be executed automatically to maintain a correct augmented reality scene and to overcome the loss of positioning accuracy caused by patient movement during surgery.
AB - Purpose: Facial repair surgeries (FRS) require high accuracy to navigate critical anatomy safely and quickly. The purpose of this paper is to develop a method that directly tracks the patient's position using video data acquired from a single camera, achieving noninvasive, real-time, and highly accurate positioning in FRS. Methods: Our method first performs camera calibration and registers the surface segmented from computed tomography to the patient. Then, a two-step constraint algorithm, comprising a feature local constraint and a distance standard deviation constraint, is used to quickly find optimal feature matching pairs. Finally, the movements of the camera and the patient, decomposed from the image motion matrix, are used to track the camera and the patient, respectively. Results: The proposed method achieved RMS fusion errors of 1.44 ± 0.35 mm, 1.50 ± 0.15 mm, and 1.63 ± 0.03 mm in skull phantom, cadaver mandible, and human experiments, respectively. These errors were lower than those of an optical tracking system-based method. Additionally, the proposed method processed video streams at up to 24 frames per second, meeting the real-time requirements of FRS. Conclusions: The proposed method does not rely on tracking markers attached to the patient; it can be executed automatically to maintain a correct augmented reality scene and to overcome the loss of positioning accuracy caused by patient movement during surgery.
KW - Augmented reality
KW - Facial repair surgeries
KW - Motion estimation
KW - Pose tracking
UR - http://www.scopus.com/inward/record.url?scp=85126235266&partnerID=8YFLogxK
U2 - 10.1007/s11548-022-02589-0
DO - 10.1007/s11548-022-02589-0
M3 - Article
C2 - 35286586
AN - SCOPUS:85126235266
SN - 1861-6410
VL - 17
SP - 981
EP - 991
JO - International Journal of Computer Assisted Radiology and Surgery
JF - International Journal of Computer Assisted Radiology and Surgery
IS - 6
ER -