TY - JOUR
T1 - Quality Evaluation of Two-Band Color Fusion Images Based on Scene Understanding
AU - Gao, Shaoshu
AU - Tian, Qilin
AU - Jin, Weiqi
AU - Yi, Sheng
AU - Ni, Xiao
AU - Cheng, Changlong
N1 - Publisher Copyright:
© 2023 Beijing Institute of Technology. All rights reserved.
PY - 2023
Y1 - 2023
N2 - In order to measure the comprehensive quality of visible and infrared color fusion images for specific visual tasks, an objective evaluation model based on scene understanding was proposed for the comprehensive quality of two-band color fusion images. The model consists of three parts: a fused-image feature extractor, a neighborhood co-occurrence matrix feature extractor, and a weight generator. First, the fused-image feature extractor extracts pixel intensity information from the fused image. Then, the neighborhood co-occurrence matrix feature extractor extracts pixel spatial-relationship information from the neighborhood co-occurrence matrix. Finally, the weight generator uses a neural network to extract structural information from the gradient map and combines it with position information to generate the weights. Experimental results show that, by extracting rich image features, the proposed method improves the consistency between the model's predictions and subjective human perception, and achieves an objective evaluation of the comprehensive quality of image fusion.
AB - In order to measure the comprehensive quality of visible and infrared color fusion images for specific visual tasks, an objective evaluation model based on scene understanding was proposed for the comprehensive quality of two-band color fusion images. The model consists of three parts: a fused-image feature extractor, a neighborhood co-occurrence matrix feature extractor, and a weight generator. First, the fused-image feature extractor extracts pixel intensity information from the fused image. Then, the neighborhood co-occurrence matrix feature extractor extracts pixel spatial-relationship information from the neighborhood co-occurrence matrix. Finally, the weight generator uses a neural network to extract structural information from the gradient map and combines it with position information to generate the weights. Experimental results show that, by extracting rich image features, the proposed method improves the consistency between the model's predictions and subjective human perception, and achieves an objective evaluation of the comprehensive quality of image fusion.
KW - color fusion image
KW - gradient map
KW - image quality evaluation
KW - neighborhood co-occurrence matrix
UR - http://www.scopus.com/inward/record.url?scp=85178634374&partnerID=8YFLogxK
U2 - 10.15918/j.tbit1001-0645.2023.032
DO - 10.15918/j.tbit1001-0645.2023.032
M3 - Article
AN - SCOPUS:85178634374
SN - 1001-0645
VL - 43
SP - 1205
EP - 1212
JO - Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology
JF - Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology
IS - 11
ER -