UniRTL: A universal RGBT and low-light benchmark for object tracking

Lian Zhang, Lingxue Wang*, Yuzhen Wu, Mingkun Chen, Dezhi Zheng, Liangcai Cao, Bangze Zeng, Yi Cai

*Corresponding author of this work

Research output: Contribution to journal › Article › peer review

Abstract

Solving single-object tracking (SOT) and multiple-object tracking (MOT) problems with a single network is challenging in RGBT tracking. We present a universal RGBT and low-light benchmark (UniRTL), which contains 3 × 626 videos for SOT and 3 × 50 videos for MOT, totaling more than 158K frame triplets. The dataset is divided into low-, middle-, and high-illuminance categories based on measured scene illuminance. We also propose a unified SOT and MOT tracking-with-detection tracker (Unismot) that comprises a detector, a first-frame target prior (FTP), and a data associator; SOT and MOT are unified by feeding the FTP into the detector and the data associator. A Re-ID long-term matching module and the reuse of low-score bounding boxes are proposed to improve SOT and MOT performance, respectively. Experiments demonstrate that Unismot performs as well as or better than its counterparts on established RGBT tracking datasets. This work promotes universal multimodal tracking throughout day and night.
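As a rough illustration only, and not the paper's Unismot implementation, the sketch below shows one way a tracking-with-detection loop could unify SOT and MOT: when a first-frame target prior is supplied it seeds a single track and suppresses new-track creation (SOT), otherwise unmatched detections spawn new tracks (MOT), and low-score boxes are reused to recover tracks that missed a high-score match. All names (`Detection`, `Track`, `associate`, `track_sequence`) and thresholds are hypothetical; the Re-ID long-term matching module is omitted.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple    # (x1, y1, x2, y2)
    score: float

@dataclass
class Track:
    track_id: int
    box: tuple

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU association; returns matches and the leftovers."""
    matched, used, unmatched_tracks = [], set(), []
    for t in tracks:
        best, best_iou = None, thresh
        for j, d in enumerate(detections):
            if j in used:
                continue
            v = iou(t.box, d.box)
            if v > best_iou:
                best, best_iou = j, v
        if best is None:
            unmatched_tracks.append(t)
        else:
            used.add(best)
            matched.append((t, detections[best]))
    unmatched_dets = [d for j, d in enumerate(detections) if j not in used]
    return matched, unmatched_tracks, unmatched_dets

def track_sequence(frames, detector, first_frame_prior=None, high_thresh=0.6):
    """Hypothetical unified loop: SOT when a first-frame prior is given, MOT otherwise."""
    tracks, next_id, results = [], 1, []
    if first_frame_prior is not None:
        tracks.append(Track(next_id, first_frame_prior))  # single seeded target
        next_id += 1
    for frame in frames:
        dets = detector(frame)                            # list[Detection]
        high = [d for d in dets if d.score >= high_thresh]
        low = [d for d in dets if d.score < high_thresh]  # reused below
        matched, lost, spare = associate(tracks, high)
        # Reuse low-score boxes for tracks that missed a high-score match.
        matched_low, lost, _ = associate(lost, low)
        for t, d in matched + matched_low:
            t.box = d.box
        if first_frame_prior is None:
            # MOT branch: unmatched high-score detections start new tracks.
            for d in spare:
                tracks.append(Track(next_id, d.box))
                next_id += 1
        results.append([(t.track_id, t.box) for t in tracks])
    return results
```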

Original language: English
Article number: 110984
Journal: Pattern Recognition
Volume: 158
DOI
Publication status: Published - Feb 2025
