RDR-KD: A Knowledge Distillation Detection Framework for Drone Scenes

Jinxiang Huang, Hong Chang, Xin Yang*, Yong Liu, Shuqi Liu, Yong Song*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Drone object detection (DOD) with real-time deployment is a research hotspot. On the one hand, tiny object detection performance is central to a drone platform's ground detection capability. Existing methods tend to design complex networks to improve accuracy on tiny objects, which significantly increases computational cost. On the other hand, the limited hardware resources of drones urgently demand lightweight models for deployment. To resolve this dilemma between detection accuracy and computational efficiency, we propose a regenerated-decoupled-responsive knowledge distillation (RDR-KD) framework tailored to drone scenes. First, we design Regenerated Distillation and Decoupled Distillation to fully transfer tiny-object feature information from the teacher model to the student model. Meanwhile, we devise a logit-based Responsive Distillation built on focal loss and efficient intersection over union (EIoU) to alleviate class imbalance. Finally, we conduct extensive experiments on the VisDrone2019 dataset. The results demonstrate that the proposed RDR-KD framework improves the student model's AP and AP_S by 3.3% and 2.9%, respectively, outperforming other state-of-the-art distillation frameworks.
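The abstract's Responsive Distillation combines focal loss with an EIoU box loss. As an illustration only, a minimal PyTorch-style sketch of such a loss pair could look like the following; the function names `focal_kd_loss` and `eiou_loss` and all hyperparameters are our assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def focal_kd_loss(student_logits, teacher_logits, gamma=2.0, T=2.0):
    """Focal-weighted logit distillation (hypothetical sketch).

    Predictions where the student disagrees most with the teacher get
    larger weights, which is one way to counter class imbalance.
    """
    t_prob = torch.sigmoid(teacher_logits / T)
    s_prob = torch.sigmoid(student_logits / T)
    weight = (t_prob - s_prob).abs() ** gamma  # focal modulation term
    bce = F.binary_cross_entropy(s_prob, t_prob, reduction="none")
    return (weight * bce).mean()

def eiou_loss(pred, target, eps=1e-7):
    """Efficient IoU (EIoU) loss between [x1, y1, x2, y2] boxes."""
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    # Center distance plus width/height difference penalties (EIoU terms)
    rho2 = ((pred[:, 0] + pred[:, 2]) - (target[:, 0] + target[:, 2])) ** 2 / 4 \
         + ((pred[:, 1] + pred[:, 3]) - (target[:, 1] + target[:, 3])) ** 2 / 4
    dw2 = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    dh2 = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2
    loss = 1 - iou + rho2 / (cw ** 2 + ch ** 2 + eps) \
         + dw2 / (cw ** 2 + eps) + dh2 / (ch ** 2 + eps)
    return loss.mean()
```

In a full training loop these two terms would be weighted and summed with the task loss; the paper's actual weighting scheme and feature-level (regenerated/decoupled) distillation components are not shown here.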

Original language: English
Article number: 6008705
Pages (from-to): 1-5
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 21
DOIs
Publication status: Published - 2024

Keywords

  • Drone object detection (DOD)
  • feature distillation
  • knowledge distillation (KD)
  • responsive distillation
