TY - JOUR
T1 - Image quality assessment based on self-supervised learning and knowledge distillation
AU - Sang, Qingbing
AU - Shu, Ziru
AU - Liu, Lixiong
AU - Hu, Cong
AU - Wu, Qin
N1 - Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2023/2
Y1 - 2023/2
N2 - Deep neural networks have achieved great success in a wide range of machine learning tasks due to their ability to learn rich semantic features from high-dimensional data. Deeper networks have been adopted in the field of image quality assessment to improve the performance of image quality assessment models. However, the success of deep neural networks depends largely on both big models with hundreds of millions of parameters and the availability of large annotated datasets. The lack of large-scale labeled data leads to over-fitting and poor generalization of deep learning models. Moreover, these models are huge, demand heavy computational power, and cannot be deployed on edge devices. To address these challenges, we propose an image quality assessment method based on self-supervised learning and knowledge distillation. First, a teacher network is trained via self-supervised learning to produce soft-target predictions; then a student network is jointly trained on the soft targets and ground-truth labels through knowledge distillation. Experiments on five benchmark databases show that the proposed method is superior to the teacher network and even outperforms state-of-the-art strategies. Furthermore, our model is much smaller than the teacher model and can be deployed on edge devices for smooth inference.
AB - Deep neural networks have achieved great success in a wide range of machine learning tasks due to their ability to learn rich semantic features from high-dimensional data. Deeper networks have been adopted in the field of image quality assessment to improve the performance of image quality assessment models. However, the success of deep neural networks depends largely on both big models with hundreds of millions of parameters and the availability of large annotated datasets. The lack of large-scale labeled data leads to over-fitting and poor generalization of deep learning models. Moreover, these models are huge, demand heavy computational power, and cannot be deployed on edge devices. To address these challenges, we propose an image quality assessment method based on self-supervised learning and knowledge distillation. First, a teacher network is trained via self-supervised learning to produce soft-target predictions; then a student network is jointly trained on the soft targets and ground-truth labels through knowledge distillation. Experiments on five benchmark databases show that the proposed method is superior to the teacher network and even outperforms state-of-the-art strategies. Furthermore, our model is much smaller than the teacher model and can be deployed on edge devices for smooth inference.
KW - Image quality evaluation
KW - Knowledge distillation
KW - Self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85143502677&partnerID=8YFLogxK
U2 - 10.1016/j.jvcir.2022.103708
DO - 10.1016/j.jvcir.2022.103708
M3 - Review article
AN - SCOPUS:85143502677
SN - 1047-3203
VL - 90
JO - Journal of Visual Communication and Image Representation
JF - Journal of Visual Communication and Image Representation
M1 - 103708
ER -