Bidirectional prediction of facial and bony shapes for orthognathic surgical planning

Lei Ma, Chunfeng Lian, Daeseung Kim, Deqiang Xiao, Dongming Wei, Qin Liu, Tianshu Kuang, Maryam Ghanbari, Guoshi Li, Jaime Gateno, Steve G.F. Shen, Li Wang, Dinggang Shen, James J. Xia*, Pew-Thian Yap*

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
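The inference step described above — running the network on multiple random point subsets and merging the outputs into a dense transformation — can be sketched in pure NumPy. The network itself is replaced by a hypothetical stand-in (`toy_network`); the subset-sampling and per-point averaging logic is the part being illustrated, not the authors' actual P2P-Conv implementation:

```python
import numpy as np

def toy_network(points):
    # Hypothetical stand-in for P2P-Conv: predicts a fixed displacement
    # of 0.5 along each axis for every input point.
    return np.full_like(points, 0.5)

def dense_transform(points, predict, n_subsets=8, subset_size=1024, seed=None):
    """Merge per-subset network outputs into a dense transformation by
    averaging the displacements predicted for each point across all the
    random subsets in which that point appears."""
    rng = np.random.default_rng(seed)
    n = len(points)
    disp_sum = np.zeros_like(points)
    counts = np.zeros(n)
    for _ in range(n_subsets):
        # Sample a random subset of the full point cloud without replacement.
        idx = rng.choice(n, size=min(subset_size, n), replace=False)
        disp_sum[idx] += predict(points[idx])
        counts[idx] += 1
    # Points never sampled keep a zero displacement (avoid division by zero).
    counts = np.maximum(counts, 1)
    return points + disp_sum / counts[:, None]

facial = np.random.default_rng(0).normal(size=(5000, 3))
bony_pred = dense_transform(facial, toy_network, seed=1)
```

In the paper, the merged dense point set would then be turned into a surface mesh via CPD-based non-rigid registration; that step is omitted here.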

Original language: English
Article number: 102644
Journal: Medical Image Analysis
Volume: 83
DOIs
Publication status: Published - Jan 2023
Externally published: Yes

Keywords

  • 3D point clouds
  • Face-bone shape transformation
  • Orthognathic surgical planning
  • Point-displacement network
