TY - GEN
T1 - A Benchmark for Vision-based Multi-UAV Multi-object Tracking
AU - Shen, Hao
AU - Yang, Xiwen
AU - Lin, Defu
AU - Chai, Jianduo
AU - Huo, Jiakai
AU - Xing, Xiaofeng
AU - He, Shaoming
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Vision-based multi-sensor multi-object tracking is a fundamental task in applications of Unmanned Aerial Vehicle (UAV) swarms. Benchmark datasets are critical to the development of computer vision research, since they provide a fair and principled way to evaluate different approaches and drive algorithmic improvement. In recent years, many benchmarks have been created for single-camera single-object tracking, single-camera multi-object detection, and single-camera multi-object tracking. However, to the best of our knowledge, few benchmarks for multi-camera multi-object tracking exist. In this paper, we build a dataset for multi-UAV multi-object tracking tasks to fill this gap. Several cameras are placed in a VICON motion capture system to simulate the UAV team, and several toy cars serve as ground targets. The first-person videos from the cameras, the motion states of the cameras, and the ground truth of the objects are recorded. We also propose a metric to evaluate performance on the multi-UAV multi-object tracking task. The dataset and the code for algorithm evaluation are available at our GitHub (https://github.com/bitshenwenxiao/MUMO).
AB - Vision-based multi-sensor multi-object tracking is a fundamental task in applications of Unmanned Aerial Vehicle (UAV) swarms. Benchmark datasets are critical to the development of computer vision research, since they provide a fair and principled way to evaluate different approaches and drive algorithmic improvement. In recent years, many benchmarks have been created for single-camera single-object tracking, single-camera multi-object detection, and single-camera multi-object tracking. However, to the best of our knowledge, few benchmarks for multi-camera multi-object tracking exist. In this paper, we build a dataset for multi-UAV multi-object tracking tasks to fill this gap. Several cameras are placed in a VICON motion capture system to simulate the UAV team, and several toy cars serve as ground targets. The first-person videos from the cameras, the motion states of the cameras, and the ground truth of the objects are recorded. We also propose a metric to evaluate performance on the multi-UAV multi-object tracking task. The dataset and the code for algorithm evaluation are available at our GitHub (https://github.com/bitshenwenxiao/MUMO).
UR - http://www.scopus.com/inward/record.url?scp=85140965198&partnerID=8YFLogxK
U2 - 10.1109/MFI55806.2022.9913874
DO - 10.1109/MFI55806.2022.9913874
M3 - Conference contribution
AN - SCOPUS:85140965198
T3 - IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems
BT - 2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI 2022
Y2 - 20 September 2022 through 22 September 2022
ER -