基于场景理解的双波段彩色融合图像质量评价

Translated title of the contribution: Quality Evaluation of Two-Band Color Fusion Image Based on Scene Understanding

Shaoshu Gao, Qilin Tian*, Weiqi Jin, Sheng Yi, Xiao Ni, Changlong Cheng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To measure the comprehensive quality of visible-and-infrared color fusion images for specific visual tasks, an objective evaluation model based on scene understanding was proposed for the comprehensive quality of two-band color fusion images. The model consists of three parts: a fusion-image feature extractor, a neighborhood co-occurrence matrix feature extractor, and a weight generator. First, the fusion-image feature extractor is used to extract pixel intensity information from the fused image. Then, the neighborhood co-occurrence matrix feature extractor is built to extract pixel spatial-relationship information from the neighborhood co-occurrence matrix. Finally, the weight generator is constructed: a neural network model extracts structure information from the gradient map, and the position information is combined with the structure information to generate the weights. Experimental results show that, on the basis of extracting abundant image features, the proposed method improves the consistency between the model's predictions and the subjective perception of the human eye, realizing objective evaluation of the comprehensive quality of fused images.
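The abstract outlines a three-part pipeline (intensity features, co-occurrence-based spatial features, and a gradient-driven weight generator). The snippet below is only a minimal illustrative sketch of that kind of pipeline, assuming a grayscale rendering of the fused image as input; the quantization level, neighbor offset, contrast statistic, and the simple gradient-based weighting used here are illustrative assumptions, not the paper's actual feature extractors or learned weight generator.

```python
# Hypothetical sketch of an intensity + co-occurrence + gradient-weighted score.
# None of these names or parameter choices come from the paper itself.
import numpy as np

def neighborhood_cooccurrence(img, levels=16, offset=(0, 1)):
    """Co-occurrence matrix of quantized intensities for one neighbor offset."""
    q = np.clip(np.floor(img.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    h, w = q.shape
    a = q[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
    b = q[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)   # count neighbor intensity pairs
    return m / max(m.sum(), 1.0)                # normalize to a joint distribution

def gradient_magnitude(img):
    """Central-difference gradient map (stand-in for the paper's gradient map)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2)

def quality_score(fused_gray):
    """Toy comprehensive-quality score: intensity feature plus co-occurrence
    contrast, weighted by mean gradient strength as a crude stand-in for a
    learned weight generator."""
    intensity_feat = fused_gray.mean() / 255.0
    m = neighborhood_cooccurrence(fused_gray)
    i, j = np.indices(m.shape)
    contrast = float((m * (i - j) ** 2).sum())  # spatial-relationship feature
    w = gradient_magnitude(fused_gray).mean()
    w = w / (w + 1.0)                           # squash weight into (0, 1)
    return w * contrast + (1.0 - w) * intensity_feat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image
    print(f"toy quality score: {quality_score(demo):.4f}")
```

In the paper, the weighting is generated by a neural network from structure and position information rather than the fixed gradient heuristic used above; the sketch only shows how intensity and spatial-relationship features might be combined into a single objective score.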

Original language: Chinese (Traditional)
Pages (from-to): 1205-1212
Number of pages: 8
Journal: Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology
Volume: 43
Issue number: 11
Publication status: Published - 2023
