TY - GEN
T1 - FuDA-YOLO
T2 - 4th International Symposium on Computer Applications and Information Technology, ISCAIT 2025
AU - Wu, Jiaqi
AU - Liu, Haowei
AU - Wang, Chongwen
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Low-light Object Detection (LOD) is a significant research field in computer vision, with great value and broad prospects in applications such as autonomous driving and security surveillance. Under low-light conditions, ambient light is insufficient, image information is severely degraded, and noise increases. However, most traditional object detection methods consider only normal lighting conditions and train their models accordingly, leading to poor performance. A common solution is to use image enhancement techniques to increase the brightness of an image and restore its details as much as possible, and then use a standard object detection network to complete the detection task. Such methods cannot meet real-time requirements well, and the evaluation criteria for image enhancement are mostly based on human vision, making them unsuitable for machine vision tasks. Moreover, one of the difficulties of LOD is the lack of annotated data, as collecting or creating new low-light datasets is difficult. To address these problems, we propose FuDA-YOLO, a LOD method that introduces unsupervised domain adaptation into the YOLO network. The main structure of FuDA-YOLO is a Multi-scale Fusion Domain Adaptation Module combined with YOLOv8n, which reduces the domain gap between the source and target domains. We have carried out a series of experiments to show that our method significantly improves the model's detection accuracy, domain adaptation ability, and inference speed.
AB - Low-light Object Detection (LOD) is a significant research field in computer vision, with great value and broad prospects in applications such as autonomous driving and security surveillance. Under low-light conditions, ambient light is insufficient, image information is severely degraded, and noise increases. However, most traditional object detection methods consider only normal lighting conditions and train their models accordingly, leading to poor performance. A common solution is to use image enhancement techniques to increase the brightness of an image and restore its details as much as possible, and then use a standard object detection network to complete the detection task. Such methods cannot meet real-time requirements well, and the evaluation criteria for image enhancement are mostly based on human vision, making them unsuitable for machine vision tasks. Moreover, one of the difficulties of LOD is the lack of annotated data, as collecting or creating new low-light datasets is difficult. To address these problems, we propose FuDA-YOLO, a LOD method that introduces unsupervised domain adaptation into the YOLO network. The main structure of FuDA-YOLO is a Multi-scale Fusion Domain Adaptation Module combined with YOLOv8n, which reduces the domain gap between the source and target domains. We have carried out a series of experiments to show that our method significantly improves the model's detection accuracy, domain adaptation ability, and inference speed.
KW - Domain Adaptation
KW - Low-Light Image Enhancement
KW - Low-Light Object Detection
KW - Multi-scale Feature Fusion
UR - https://www.scopus.com/pages/publications/105010188371
U2 - 10.1109/ISCAIT64916.2025.11010400
DO - 10.1109/ISCAIT64916.2025.11010400
M3 - Conference contribution
AN - SCOPUS:105010188371
T3 - 2025 4th International Symposium on Computer Applications and Information Technology, ISCAIT 2025
SP - 30
EP - 35
BT - 2025 4th International Symposium on Computer Applications and Information Technology, ISCAIT 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 March 2025 through 23 March 2025
ER -