MSTDKD: a framework of using multiple self-supervised methods for semisupervised learning

Jia Bin Liu*, Xuan Ming Zhang, Jun Hu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Image classification is a fundamental task in computer vision, and training a general image classifier requires a large amount of labeled data to achieve good generalization performance. In practical applications, however, labeled data is expensive to obtain, whereas unlabeled images are easy to collect, which makes semi-supervised image classification a meaningful research direction. This paper proposes a framework for semi-supervised classification that utilizes multiple self-supervised methods. Our approach consists of three steps. First, pre-train multiple models on unlabeled data using different self-supervised methods. Then, fine-tune these models on the labeled data, except for the model pre-trained by contrastive learning, to obtain multiple self-supervised teacher models. Finally, a multi-teacher knowledge distillation framework transfers the knowledge of the self-supervised teacher models to the model pre-trained by contrastive learning, further improving its performance. We conducted experiments on CIFAR-10 and miniImageNet60. Our method achieves better results than using any single self-supervised method alone, and also achieves superior performance compared to other semi-supervised methods.
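The final distillation step described above can be sketched as a loss that averages the per-teacher distillation terms. This is a minimal illustration, not the paper's exact formulation: the function name, the uniform averaging over teachers, and the temperature value are assumptions, since the abstract does not specify the precise loss used.

```python
import numpy as np

def softmax(logits, T):
    """Temperature-scaled softmax over the class dimension."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=4.0):
    """Hinton-style distillation loss averaged over several teachers.

    student_logits: (batch, classes) array from the student model.
    teacher_logits_list: list of (batch, classes) arrays, one per
        self-supervised teacher.
    T: softmax temperature; the loss is rescaled by T**2 as is
        conventional for distillation.
    """
    p_s = softmax(student_logits, T)
    loss = 0.0
    for t_logits in teacher_logits_list:
        p_t = softmax(t_logits, T)
        # KL(teacher || student), averaged over the batch
        loss += np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=1))
    return (T * T) * loss / len(teacher_logits_list)
```

In practice this term would be combined with a supervised cross-entropy loss on the labeled data; the loss is zero when every teacher's distribution matches the student's and grows as they diverge.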

Original language: English
Title of host publication: Third International Symposium on Computer Engineering and Intelligent Communications, ISCEIC 2022
Editors: Xianye Ben
Publisher: SPIE
ISBN (Electronic): 9781510660298
Publication status: Published - 2023
Externally published: Yes
Event: 3rd International Symposium on Computer Engineering and Intelligent Communications, ISCEIC 2022 - Xi'an, China
Duration: 16 Sept 2022 - 18 Sept 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 12462
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 3rd International Symposium on Computer Engineering and Intelligent Communications, ISCEIC 2022
Country/Territory: China
City: Xi'an
Period: 16/09/22 - 18/09/22

Keywords

  • Knowledge distillation
  • contrastive learning
  • image classification
  • self-supervised method
  • semisupervised learning
