Entity-aware and Motion-aware Transformers for Language-driven Action Localization

Shuo Yang, Xinxiao Wu*

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

Language-driven action localization in videos is a challenging task that involves not only visual-linguistic matching but also action boundary prediction. Recent progress has been achieved by aligning the language query to video segments, but estimating precise boundaries is still under-explored. In this paper, we propose entity-aware and motion-aware Transformers that progressively localize actions in videos by first coarsely locating clips with entity queries and then finely predicting exact boundaries in a shrunken temporal region with motion queries. The entity-aware Transformer incorporates the textual entities into visual representation learning via cross-modal and cross-frame attentions to facilitate attending to action-related video clips. The motion-aware Transformer captures fine-grained motion changes at multiple temporal scales by integrating long short-term memory into the self-attention module to further improve the precision of action boundary prediction. Extensive experiments on the Charades-STA and TACoS datasets demonstrate that our method achieves better performance than existing methods.
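The coarse-to-fine procedure described in the abstract can be illustrated with a minimal sketch: a first stage picks the clip that best matches the query, and a second stage searches only a shrunken window around that clip for exact start/end frames. All function names, the window-expansion parameter, and the span-scoring rule below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of two-stage (coarse-to-fine) action localization.
# Stage 1 uses per-clip matching scores (the entity-aware stage's output);
# stage 2 uses per-frame boundary scores (the motion-aware stage's output)
# restricted to a shrunken temporal region around the chosen clip.

def coarse_to_fine_localize(clip_scores, frame_scores, clip_len, expand=0.5):
    """Return (start_frame, end_frame) of the best-scoring span.

    clip_scores: per-clip query-matching scores (one per clip).
    frame_scores: per-frame scores over the whole video.
    clip_len: number of frames per clip.
    expand: fraction of a clip length to extend the search window on each side.
    """
    # Stage 1: coarse localization -- choose the best-matching clip.
    best_clip = max(range(len(clip_scores)), key=lambda i: clip_scores[i])

    # Shrunken temporal region: the chosen clip plus a small margin.
    margin = int(expand * clip_len)
    lo = max(0, best_clip * clip_len - margin)
    hi = min(len(frame_scores), (best_clip + 1) * clip_len + margin)

    # Stage 2: fine localization -- best contiguous span inside the window,
    # scored here by the sum of its frame scores (a stand-in for the
    # learned boundary-prediction heads).
    best = (lo, lo + 1)
    best_score = float("-inf")
    for s in range(lo, hi):
        total = 0.0
        for e in range(s + 1, hi + 1):
            total += frame_scores[e - 1]
            if total > best_score:
                best_score = total
                best = (s, e)
    return best
```

The key design point this sketch mirrors is that the fine stage never scans the full video: restricting boundary search to the neighborhood of the coarsely selected clip both reduces cost and discards distracting frames far from the action.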

Original language: English
Title of host publication: Proceedings of the 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Editor: Luc De Raedt
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 1552-1558
Number of pages: 7
ISBN (Electronic): 9781956792003
Publication status: Published - 2022
Event: 31st International Joint Conference on Artificial Intelligence, IJCAI 2022 - Vienna, Austria
Duration: 23 Jul 2022 - 29 Jul 2022

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823

Conference

Conference: 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Country/Territory: Austria
City: Vienna
Period: 23/07/22 - 29/07/22
