TY - JOUR
T1 - Visualization of Cross-View Multi-Object Tracking for Surveillance Videos in Crossroad
AU - Liu, Cai Hong
AU - Zhang, Lei
AU - Huang, Hua
N1 - Publisher Copyright:
© 2018, Science Press. All rights reserved.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - Cross-view multi-object tracking and display over large scenes is a basic requirement of intelligent surveillance video processing systems. However, the limited field of view of a single camera cannot satisfy the requirements of long-distance object tracking. In this paper, we propose an algorithm for the visualization of cross-view multi-object tracking. We combine object information from multiple views to track objects in a wider field of view, which is obtained by stitching multiple views with overlapping areas. The proposed algorithm consists of four steps: background stitching, object detection, cross-view multi-object tracking, and visualization. In the background stitching step, we first need to determine the mapping from each view to the reference view plane. However, too few feature points can be found in the crossroad scene, so the corresponding points cannot be obtained with traditional methods based on feature point detection and matching. We therefore present a semi-interactive method to determine the corresponding points between each view and the reference view. This semi-interactive method is based on geometric information in the background image, such as vanishing points and horizontal lines, and is easy to operate and reliable. We then use the obtained point pairs to compute the homography matrix, which projects each camera image plane onto the reference plane. Because perspective projection with the homography matrix introduces severe projective distortion, the SPHP algorithm is adopted to preserve perspective and obtain a field of view that is as wide as possible. Finally, we stitch the calibrated background images with linear fusion, which yields the mapping from each view to the stitched view.
In the object detection step, we detect objects with the ViBe background modeling algorithm. Background subtraction is commonly used for object detection, and the Gaussian mixture model is one of the most popular background subtraction methods. In this paper, however, the ViBe algorithm is a more appropriate choice: compared with the Gaussian mixture model, it is less noisy and more conveniently handles ghosting and objects that stop for a long time. Although the ViBe algorithm is suitable, it is a pixel-level algorithm and is sensitive to shadows, so we remove shadows to improve the accuracy of object detection. In the cross-view multi-object tracking step, we use the obtained mapping to locate every object in the reference view, and in each view we achieve single-view multi-object tracking with a Kalman filter. Once the trajectory in each single view is obtained, the same object in different views is identified by trajectory correspondence, which matches trajectories from different views using the least mean squares criterion. Finally, in the visualization step, we implement the cross-view multi-object tracking and visualization algorithm. Experimental results demonstrate that our algorithm is effective for persistent object tracking and visualization.
AB - Cross-view multi-object tracking and display over large scenes is a basic requirement of intelligent surveillance video processing systems. However, the limited field of view of a single camera cannot satisfy the requirements of long-distance object tracking. In this paper, we propose an algorithm for the visualization of cross-view multi-object tracking. We combine object information from multiple views to track objects in a wider field of view, which is obtained by stitching multiple views with overlapping areas. The proposed algorithm consists of four steps: background stitching, object detection, cross-view multi-object tracking, and visualization. In the background stitching step, we first need to determine the mapping from each view to the reference view plane. However, too few feature points can be found in the crossroad scene, so the corresponding points cannot be obtained with traditional methods based on feature point detection and matching. We therefore present a semi-interactive method to determine the corresponding points between each view and the reference view. This semi-interactive method is based on geometric information in the background image, such as vanishing points and horizontal lines, and is easy to operate and reliable. We then use the obtained point pairs to compute the homography matrix, which projects each camera image plane onto the reference plane. Because perspective projection with the homography matrix introduces severe projective distortion, the SPHP algorithm is adopted to preserve perspective and obtain a field of view that is as wide as possible. Finally, we stitch the calibrated background images with linear fusion, which yields the mapping from each view to the stitched view.
In the object detection step, we detect objects with the ViBe background modeling algorithm. Background subtraction is commonly used for object detection, and the Gaussian mixture model is one of the most popular background subtraction methods. In this paper, however, the ViBe algorithm is a more appropriate choice: compared with the Gaussian mixture model, it is less noisy and more conveniently handles ghosting and objects that stop for a long time. Although the ViBe algorithm is suitable, it is a pixel-level algorithm and is sensitive to shadows, so we remove shadows to improve the accuracy of object detection. In the cross-view multi-object tracking step, we use the obtained mapping to locate every object in the reference view, and in each view we achieve single-view multi-object tracking with a Kalman filter. Once the trajectory in each single view is obtained, the same object in different views is identified by trajectory correspondence, which matches trajectories from different views using the least mean squares criterion. Finally, in the visualization step, we implement the cross-view multi-object tracking and visualization algorithm. Experimental results demonstrate that our algorithm is effective for persistent object tracking and visualization.
KW - Cross-view
KW - Homography
KW - Kalman filter
KW - Multi-object tracking
KW - Trajectory correspondence
KW - ViBe
UR - http://www.scopus.com/inward/record.url?scp=85048115311&partnerID=8YFLogxK
U2 - 10.11897/SP.J.1016.2018.00221
DO - 10.11897/SP.J.1016.2018.00221
M3 - Article
AN - SCOPUS:85048115311
SN - 0254-4164
VL - 41
SP - 221
EP - 235
JO - Jisuanji Xuebao/Chinese Journal of Computers
JF - Jisuanji Xuebao/Chinese Journal of Computers
IS - 1
ER -