TY - GEN
T1 - A Vehicle Target Detection Method Based on Feature Level Fusion of Infrared and Visible Light Image
AU - Xin, Dong
AU - Xu, Lixin
AU - Chen, Huimin
AU - Yang, Xu
AU - Zhang, Ruiheng
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Single-mode recognition remains a difficult problem for the detection and recognition of road vehicle targets in complex urban scenes. We therefore exploit the complementary feature information that infrared and visible images provide under different conditions, and propose a deep-learning-based, feature-level infrared and visible image fusion method for target detection. The method first takes a registered infrared-visible image pair, extracts features from each modality with two backbone feature extraction networks, merges them in a feature fusion layer, and feeds the result into a feature pyramid network to obtain the effective feature layers, on which classification and regression predictions are performed. On the test set, the mAP of the fusion method is 0.89, higher than that obtained using only visible images (mAP 0.82) or only infrared images (mAP 0.79) on the same test set. In night-time scenes, the mAP of the fusion method is also much higher than that of other deep learning frameworks. The experimental results show that the infrared and visible image fusion target detection method realized in this paper has clear advantages over traditional single-modality methods and good application prospects.
AB - Single-mode recognition remains a difficult problem for the detection and recognition of road vehicle targets in complex urban scenes. We therefore exploit the complementary feature information that infrared and visible images provide under different conditions, and propose a deep-learning-based, feature-level infrared and visible image fusion method for target detection. The method first takes a registered infrared-visible image pair, extracts features from each modality with two backbone feature extraction networks, merges them in a feature fusion layer, and feeds the result into a feature pyramid network to obtain the effective feature layers, on which classification and regression predictions are performed. On the test set, the mAP of the fusion method is 0.89, higher than that obtained using only visible images (mAP 0.82) or only infrared images (mAP 0.79) on the same test set. In night-time scenes, the mAP of the fusion method is also much higher than that of other deep learning frameworks. The experimental results show that the infrared and visible image fusion target detection method realized in this paper has clear advantages over traditional single-modality methods and good application prospects.
KW - Feature fusion
KW - Infrared image
KW - Object detection
KW - Visible image
UR - http://www.scopus.com/inward/record.url?scp=85149568839&partnerID=8YFLogxK
U2 - 10.1109/CCDC55256.2022.10033899
DO - 10.1109/CCDC55256.2022.10033899
M3 - Conference contribution
AN - SCOPUS:85149568839
T3 - Proceedings of the 34th Chinese Control and Decision Conference, CCDC 2022
SP - 469
EP - 474
BT - Proceedings of the 34th Chinese Control and Decision Conference, CCDC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 34th Chinese Control and Decision Conference, CCDC 2022
Y2 - 15 August 2022 through 17 August 2022
ER -
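
Note: the abstract describes a dual-backbone, feature-level fusion pipeline (two feature extraction networks, a fusion layer, an FPN neck, then classification/regression heads). The following is a minimal sketch of that kind of front-end, not the authors' implementation; the backbone structure, channel sizes, and concatenation-based fusion are all assumptions made for illustration.

# Minimal sketch (assumptions, not the paper's code) of a feature-level
# fusion detector front-end: two backbones extract features from registered
# infrared and visible images, a fusion layer merges them per scale, and a
# top-down FPN-style neck produces feature maps for the prediction heads.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    """3x3 conv + BN + ReLU; stride 2 halves the spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TinyBackbone(nn.Module):
    """Toy stand-in for a real backbone; returns three feature scales."""
    def __init__(self, in_ch):
        super().__init__()
        self.stage1 = conv_block(in_ch, 64)    # 1/2 resolution
        self.stage2 = conv_block(64, 128)      # 1/4 resolution
        self.stage3 = conv_block(128, 256)     # 1/8 resolution

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        return [c1, c2, c3]


class FusionDetectorFrontEnd(nn.Module):
    """Two backbones -> per-scale concat fusion -> top-down FPN neck."""
    def __init__(self, fpn_ch=128):
        super().__init__()
        self.vis_backbone = TinyBackbone(in_ch=3)   # RGB visible image
        self.ir_backbone = TinyBackbone(in_ch=1)    # single-channel IR image
        chans = [64, 128, 256]
        # Feature fusion layer (assumed): concatenate the two modalities
        # per scale and reduce channels with a 1x1 conv.
        self.fuse = nn.ModuleList(nn.Conv2d(2 * c, fpn_ch, 1) for c in chans)
        self.smooth = nn.ModuleList(
            nn.Conv2d(fpn_ch, fpn_ch, 3, padding=1) for _ in chans
        )

    def forward(self, visible, infrared):
        vis_feats = self.vis_backbone(visible)
        ir_feats = self.ir_backbone(infrared)
        fused = [
            f(torch.cat([v, r], dim=1))
            for f, v, r in zip(self.fuse, vis_feats, ir_feats)
        ]
        # Top-down pathway: upsample the coarser map and add it in.
        p3 = fused[2]
        p2 = fused[1] + nn.functional.interpolate(p3, scale_factor=2)
        p1 = fused[0] + nn.functional.interpolate(p2, scale_factor=2)
        # These effective feature layers would feed the classification
        # and regression heads of the detector.
        return [s(p) for s, p in zip(self.smooth, [p1, p2, p3])]


if __name__ == "__main__":
    model = FusionDetectorFrontEnd()
    vis = torch.randn(1, 3, 256, 256)   # registered visible image
    ir = torch.randn(1, 1, 256, 256)    # registered infrared image
    for level, feat in enumerate(model(vis, ir)):
        print(f"P{level + 1}:", tuple(feat.shape))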