TaCoTrack: Tracking Object with Temporal Context

Zhixuan Wang*, Bo Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

Abstract

The performance of visual object tracking generally depends on the information extracted from consecutive frames. However, existing trackers cannot leverage temporal contexts to extract sufficient information from frames and do not adapt well to various challenges. In this paper, we present a neat and efficient framework, TaCoTrack, which fully exploits temporal contexts for object tracking. The temporal contexts are employed from two perspectives: feature fusion and search-response feature refinement. Specifically, for feature fusion, a dynamic self-adaptive convolution, which provides the capability of spatial feature representation, is designed to fuse the features extracted from multiple input frames with temporal information. For search-response feature refinement, we construct a temporal convolution whose weights and bias change with each input. Extensive experiments fully demonstrate the effective and robust performance of TaCoTrack.
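The abstract's temporal convolution, whose weights and bias change with each input, can be illustrated with a minimal input-conditioned ("dynamic") convolution sketch. This is an illustrative assumption of how such a layer can work in general, not the authors' implementation: a global statistic of the input mixes a small bank of base kernels, so the effective filter differs per input.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dynamic_conv1d(signal, base_kernels, base_biases, gate_weights):
    """Valid 1-D convolution whose kernel and bias are generated per input.

    All names (`base_kernels`, `gate_weights`) and the mixing scheme are
    hypothetical, chosen only to illustrate input-dependent weights.
    """
    # 1. Summarize the input with a global statistic (here, its mean).
    ctx = sum(signal) / len(signal)
    # 2. Turn the statistic into soft mixing coefficients over the bank.
    coeffs = softmax([g * ctx for g in gate_weights])
    # 3. Mix the base kernels and biases: the effective weights and bias
    #    now change with each input, as the abstract describes.
    k = len(base_kernels[0])
    kernel = [sum(c * bk[i] for c, bk in zip(coeffs, base_kernels))
              for i in range(k)]
    bias = sum(c * b for c, b in zip(coeffs, base_biases))
    # 4. Slide the input-specific kernel over the signal (valid padding).
    return [sum(kernel[i] * signal[t + i] for i in range(k)) + bias
            for t in range(len(signal) - k + 1)]
```

For example, a bank holding a smoothing kernel and an edge kernel is re-weighted differently for a flat input than for a peaked one, so the same layer behaves differently per frame:

```python
bank = [[0.25, 0.5, 0.25], [-1.0, 0.0, 1.0]]  # smoothing vs. edge filter
y = dynamic_conv1d([0.0, 1.0, 2.0, 3.0, 2.0, 1.0], bank, [0.0, 0.0], [1.0, -1.0])
```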

Original language: English
Title of host publication: Proceedings of the 35th Chinese Control and Decision Conference, CCDC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4068-4073
Number of pages: 6
ISBN (electronic): 9798350334722
DOI
Publication status: Published - 2023
Event: 35th Chinese Control and Decision Conference, CCDC 2023 - Yichang, China
Duration: 20 May 2023 - 22 May 2023

Publication series

Name: Proceedings of the 35th Chinese Control and Decision Conference, CCDC 2023

Conference

Conference: 35th Chinese Control and Decision Conference, CCDC 2023
Country/Territory: China
City: Yichang
Period: 20/05/23 - 22/05/23
