TY - GEN
T1 - MSTDKD
T2 - 3rd International Symposium on Computer Engineering and Intelligent Communications, ISCEIC 2022
AU - Liu, Jia Bin
AU - Zhang, Xuan Ming
AU - Hu, Jun
N1 - Publisher Copyright:
© The Authors. Published under a Creative Commons Attribution CC-BY 3.0 License.
PY - 2023
Y1 - 2023
N2 - Image classification is a fundamental task in computer vision, and training a general image classification model requires a large amount of labeled data to achieve good generalization performance. However, in practical applications, obtaining labeled data is expensive. In contrast, unlabeled images are easy to obtain, so semi-supervised image classification is more meaningful for research. This paper proposes a framework for semi-supervised classification that utilizes multiple self-supervised methods. Our approach is divided into three steps. First, multiple models are pre-trained on unlabeled data using different self-supervised methods. Then, the labeled data are used to fine-tune these models, except the model pre-trained by contrastive learning, to obtain multiple self-supervised teacher models. Finally, a multi-teacher knowledge distillation framework is used to transfer the knowledge of the multiple self-supervised teacher models to the model pre-trained by contrastive learning to further improve its performance. We conducted experiments on CIFAR-10 and miniImageNet60. Our method achieves better results than using only a single self-supervised method and superior performance compared with other semi-supervised methods.
AB - Image classification is a fundamental task in computer vision, and training a general image classification model requires a large amount of labeled data to achieve good generalization performance. However, in practical applications, obtaining labeled data is expensive. In contrast, unlabeled images are easy to obtain, so semi-supervised image classification is more meaningful for research. This paper proposes a framework for semi-supervised classification that utilizes multiple self-supervised methods. Our approach is divided into three steps. First, multiple models are pre-trained on unlabeled data using different self-supervised methods. Then, the labeled data are used to fine-tune these models, except the model pre-trained by contrastive learning, to obtain multiple self-supervised teacher models. Finally, a multi-teacher knowledge distillation framework is used to transfer the knowledge of the multiple self-supervised teacher models to the model pre-trained by contrastive learning to further improve its performance. We conducted experiments on CIFAR-10 and miniImageNet60. Our method achieves better results than using only a single self-supervised method and superior performance compared with other semi-supervised methods.
KW - Knowledge distillation
KW - contrastive learning
KW - image classification
KW - self-supervised method
KW - semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85159271999&partnerID=8YFLogxK
U2 - 10.1117/12.2661030
DO - 10.1117/12.2661030
M3 - Conference contribution
AN - SCOPUS:85159271999
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Third International Symposium on Computer Engineering and Intelligent Communications, ISCEIC 2022
A2 - Ben, Xianye
PB - SPIE
Y2 - 16 September 2022 through 18 September 2022
ER -