TY - JOUR
T1 - Multi-Vehicle Cooperative Object Tracking Method
AU - Gong, Shixiong
AU - Wang, Xu
AU - Kong, Guojie
AU - Gong, Jianwei
N1 - Publisher Copyright:
© 2022 China Ordnance Society. All rights reserved.
PY - 2022/10
Y1 - 2022/10
AB - Multi-vehicle information fusion technology is an important way to improve the environmental perception capability of unmanned ground systems. To address the discontinuous and unstable object tracking caused by occlusion and blind spots in single-vehicle sensing, a result-level fusion system model for centralized multi-vehicle cooperative perception is proposed. The system model uses lidar as the on-board perception sensor and applies D-S evidence theory at the main control terminal to fuse the environment grid maps constructed by the individual vehicles into a global static environment map. Based on this environment model, a multi-vehicle cooperative object detection and tracking method is designed. First, a maximum-value suppression method is used to resolve fusion conflicts among detected objects. Then, a cascaded dynamic object matching and track management method is designed to complete object prediction and tracking and to send the results back to the vehicles. Test results from a real-vehicle system composed of two unmanned vehicles show that, when an object is occluded, the proposed multi-vehicle cooperative object detection and tracking architecture obtains more comprehensive environmental information about the object than a single-vehicle perception system. No tracked object is missed and no track jumps occur, the error between the tracker's output position state and the detection result is small, the state of the tracked object is estimated accurately, and the tracking trajectory remains continuous, thus effectively extending the field of view of single-vehicle perception.
KW - ground unmanned system
KW - lidar
KW - multi-vehicle cooperative perception
KW - object detection
KW - object tracking
UR - http://www.scopus.com/inward/record.url?scp=85141923308&partnerID=8YFLogxK
U2 - 10.12382/bgxb.2021.0462
DO - 10.12382/bgxb.2021.0462
M3 - Article
AN - SCOPUS:85141923308
SN - 1000-1093
VL - 43
SP - 2429
EP - 2442
JO - Binggong Xuebao/Acta Armamentarii
JF - Binggong Xuebao/Acta Armamentarii
IS - 10
ER -