TY - GEN
T1 - QS-Hyper
T2 - 28th International Conference on Neural Information Processing, ICONIP 2021
AU - Zhang, Xuewen
AU - Zhang, Yunye
AU - Yu, Wenxin
AU - Nie, Liang
AU - Jiang, Ning
AU - Gong, Jun
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Blind/no-reference image quality assessment (IQA) aims to provide a quality score for a single image without any reference. In this context, deep learning models can capture various image artifacts and have driven significant progress in this field. However, current IQA methods generally rely on convolutional neural networks (CNNs) pre-trained on classification tasks to obtain image representations, which do not perfectly represent the quality of images. To solve this problem, this paper uses semi-supervised representation learning to train a quality-sensitive encoder (QS-encoder), which extracts image features specifically for image quality. Intuitively, these features are more conducive to training the IQA model than features learned for classification tasks. The QS-encoder is then plugged into a carefully designed hyper network to build a quality-sensitive hyper network (QS-hyper) that solves IQA tasks in more general and complex environments. Extensive experiments on public IQA datasets show that our method outperforms most state-of-the-art methods on both the Pearson linear correlation coefficient (PLCC) and Spearman’s rank correlation coefficient (SRCC), achieving a 3% PLCC improvement and a 3.9% SRCC improvement on the TID2013 dataset. This demonstrates that our method is superior at capturing various image distortions and meets a broader range of evaluation requirements.
AB - Blind/no-reference image quality assessment (IQA) aims to provide a quality score for a single image without any reference. In this context, deep learning models can capture various image artifacts and have driven significant progress in this field. However, current IQA methods generally rely on convolutional neural networks (CNNs) pre-trained on classification tasks to obtain image representations, which do not perfectly represent the quality of images. To solve this problem, this paper uses semi-supervised representation learning to train a quality-sensitive encoder (QS-encoder), which extracts image features specifically for image quality. Intuitively, these features are more conducive to training the IQA model than features learned for classification tasks. The QS-encoder is then plugged into a carefully designed hyper network to build a quality-sensitive hyper network (QS-hyper) that solves IQA tasks in more general and complex environments. Extensive experiments on public IQA datasets show that our method outperforms most state-of-the-art methods on both the Pearson linear correlation coefficient (PLCC) and Spearman’s rank correlation coefficient (SRCC), achieving a 3% PLCC improvement and a 3.9% SRCC improvement on the TID2013 dataset. This demonstrates that our method is superior at capturing various image distortions and meets a broader range of evaluation requirements.
KW - Blind image quality assessment
KW - Convolutional neural networks
KW - Representation learning
UR - http://www.scopus.com/inward/record.url?scp=85121934389&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-92273-3_26
DO - 10.1007/978-3-030-92273-3_26
M3 - Conference contribution
AN - SCOPUS:85121934389
SN - 9783030922726
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 311
EP - 322
BT - Neural Information Processing - 28th International Conference, ICONIP 2021, Proceedings
A2 - Mantoro, Teddy
A2 - Lee, Minho
A2 - Ayu, Media Anugerah
A2 - Wong, Kok Wai
A2 - Hidayanto, Achmad Nizar
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 December 2021 through 12 December 2021
ER -