Automated Layer Segmentation of Retinal Optical Coherence Tomography Images Using a Deep Feature Enhanced Structured Random Forests Classifier

  • Xiaoming Liu*
  • Tianyu Fu
  • Zhifang Pan
  • Dong Liu
  • Wei Hu
  • Jun Liu
  • Kai Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

52 Citations (Scopus)

Abstract

Optical coherence tomography (OCT) is a high-resolution, noninvasive imaging modality that has become one of the most prevalent techniques for ophthalmic diagnosis. Retinal layer segmentation is crucial for diagnosing and studying retinal diseases, but manual segmentation is time-consuming and subjective. In this work, we propose a new method for automatically segmenting retinal OCT images, which integrates deep features and hand-designed features to train a structured random forests classifier. The deep convolutional features are learned from a deep residual network. With the trained classifier, we obtain a contour probability map for each layer; finally, a shortest-path search is applied to the map to produce the final layer segmentation. The experimental results show that our method achieves a mean layer contour error of 1.215 pixels, compared with 1.464 pixels for the state of the art, and an F1-score of 0.885, which also exceeds the 0.863 obtained by the state-of-the-art method.
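The last stage of the pipeline described above turns each layer's contour probability map into a boundary by finding a shortest path across the image. The following is a minimal sketch of that idea, assuming a 2-D probability map with one column per A-scan; the function name, the cost `1 - probability`, and the simple column-wise dynamic program with a bounded vertical step are illustrative assumptions, not the authors' exact graph construction.

```python
import numpy as np

def trace_layer_boundary(prob_map, max_step=1):
    """Trace one layer boundary through a contour-probability map.

    prob_map : 2-D array (rows x cols); higher values mean "more likely
    a boundary pixel". We take cost = 1 - probability and find the
    minimum-cost left-to-right path, moving at most `max_step` rows
    between adjacent columns (a dynamic-programming shortest path).
    """
    rows, cols = prob_map.shape
    cost = 1.0 - prob_map
    acc = np.full((rows, cols), np.inf)   # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary  # row index of the boundary in each column
```

On a synthetic map with a single bright horizontal ridge, the traced boundary follows that ridge; in practice the search would be run once per layer contour produced by the classifier.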

Original language: English
Article number: 8411332
Pages (from-to): 1404-1416
Number of pages: 13
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 23
Issue number: 4
Publication status: Published - Jul 2019
Externally published: Yes

Keywords

  • OCT
  • convolutional neural networks
  • image processing
  • layer segmentation
  • structured random forests

