TY - GEN
T1 - RET-CLIP
T2 - 27th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2024
AU - Du, Jiawei
AU - Guo, Jia
AU - Zhang, Weihang
AU - Yang, Shengzhu
AU - Liu, Hanruo
AU - Li, Huiqi
AU - Wang, Ningli
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Vision-language foundation models are increasingly investigated in computer vision and natural language processing, yet their exploration in ophthalmology and broader medical applications remains limited. A key challenge is the lack of labeled data for training foundation models. To address this issue, this paper develops a CLIP-style retinal image foundation model. Our foundation model, RET-CLIP, is trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy that operates at the left-eye, right-eye, and patient levels to reflect real-world clinical scenarios. Extensive experiments show that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multi-disease diagnosis, and multi-label classification of multiple diseases, demonstrating the performance and generality of our foundation model. The source code and pre-trained model are available at https://github.com/sStonemason/RET-CLIP.
AB - Vision-language foundation models are increasingly investigated in computer vision and natural language processing, yet their exploration in ophthalmology and broader medical applications remains limited. A key challenge is the lack of labeled data for training foundation models. To address this issue, this paper develops a CLIP-style retinal image foundation model. Our foundation model, RET-CLIP, is trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy that operates at the left-eye, right-eye, and patient levels to reflect real-world clinical scenarios. Extensive experiments show that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multi-disease diagnosis, and multi-label classification of multiple diseases, demonstrating the performance and generality of our foundation model. The source code and pre-trained model are available at https://github.com/sStonemason/RET-CLIP.
KW - Foundation Model
KW - Retinal Fundus Image
KW - Vision-Language Pre-training
UR - http://www.scopus.com/inward/record.url?scp=85208174466&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-72390-2_66
DO - 10.1007/978-3-031-72390-2_66
M3 - Conference contribution
AN - SCOPUS:85208174466
SN - 9783031723896
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 709
EP - 719
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 - 27th International Conference, Proceedings
A2 - Linguraru, Marius George
A2 - Dou, Qi
A2 - Feragen, Aasa
A2 - Giannarou, Stamatia
A2 - Glocker, Ben
A2 - Lekadir, Karim
A2 - Schnabel, Julia A.
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 6 October 2024 through 10 October 2024
ER -