RFID-assisted visual multiple object tracking without using visual appearance and motion

Rongzihan Song*, Zihao Wang*, Jia Guo*, Boon Siew Han, Alvin Hong Yee Wong, Lei Sun, Zhiping Lin*

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Visual Multiple Object Tracking (MOT) typically relies on appearance and motion cues for association. However, these cues can be unreliable in challenging scenarios, such as appearance ambiguity and frequent occlusions. In this paper, we introduce a novel deep RF-affinity neural network (DRFAN) that enhances visual tracking with the aid of a passive wireless positioning device, Radio Frequency Identification (RFID). DRFAN addresses object tracking by introducing a new concept, the "candidate trajectory", to indicate target movement. This approach fundamentally deviates from existing fusion methods that rely on known visual tracks; instead, DRFAN uses only detection bounding boxes and RFID signals. The proposed method overcomes the limitations of visual tracking by swiftly resuming correct tracking whenever a failure occurs. This is the first work to use signals from low-cost passive RFID tags to achieve image-level localization, and a discriminative neural network is designed specifically for RFID-assisted visual association. Our experimental results validate the robustness and applicability of the proposed approach.
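The abstract describes associating per-frame detection bounding boxes with RFID-derived image positions rather than with known visual tracks. As a rough illustration only (the paper's DRFAN is a learned discriminative network, not shown here), the sketch below matches detection box centers to hypothetical RFID tag positions projected into the image plane, using plain Euclidean distance as a stand-in affinity and a greedy one-to-one assignment. All names and thresholds are illustrative assumptions, not the authors' implementation.

```python
import math

def associate(detections, rfid_positions, max_dist=80.0):
    """Greedily match detection box centers to RFID-projected image
    positions by Euclidean distance (a stand-in for a learned affinity).

    detections      -- list of (x1, y1, x2, y2) bounding boxes
    rfid_positions  -- list of (x, y) tag positions projected to the image
    Returns a dict: tag index -> matched detection index.
    """
    centers = [((x1 + x2) / 2, (y1 + y2) / 2)
               for x1, y1, x2, y2 in detections]
    # Enumerate every (distance, detection, tag) pair, cheapest first.
    pairs = sorted(
        (math.dist(c, p), di, ti)
        for di, c in enumerate(centers)
        for ti, p in enumerate(rfid_positions)
    )
    matched, used_d, used_t = {}, set(), set()
    for dist, di, ti in pairs:
        if dist <= max_dist and di not in used_d and ti not in used_t:
            matched[ti] = di
            used_d.add(di)
            used_t.add(ti)
    return matched

dets = [(100, 50, 160, 200), (300, 60, 360, 210)]  # x1, y1, x2, y2
tags = [(330, 135), (128, 120)]                    # projected tag positions
print(associate(dets, tags))  # {0: 1, 1: 0}
```

A real system would replace the distance-threshold affinity with the network's learned RF-visual affinity scores and a Hungarian (optimal) assignment, but the association structure is the same.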

Original language: English
Title of host publication: 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
Publisher: IEEE Computer Society
Pages: 2745-2749
Number of pages: 5
ISBN (electronic): 9781728198354
DOI
Publication status: Published - 2023
Event: 30th IEEE International Conference on Image Processing, ICIP 2023 - Kuala Lumpur, Malaysia
Duration: 8 Oct 2023 - 11 Oct 2023

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 30th IEEE International Conference on Image Processing, ICIP 2023
Country/Territory: Malaysia
City: Kuala Lumpur
Period: 8/10/23 - 11/10/23
