Temporal Attention-Pyramid Pooling for Temporal Action Detection

Ming Gang Gan, Yan Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Temporal action detection is a challenging task in video understanding that is usually divided into two stages: proposal generation and proposal classification. Learning proposal features is a crucial step for both stages. However, most methods ignore the temporal information of proposals and treat background and action frames within a proposal equally, leading to poor proposal features. In this paper, we propose a novel Temporal Attention-Pyramid Pooling (TAPP) method to learn features for action proposals of arbitrary length. TAPP exploits an attention mechanism to focus on the discriminative parts of a proposal, suppressing the influence of background frames on the proposal feature. It constructs a temporal pyramid structure that converts an arbitrary-length proposal feature sequence into multiple fixed-length sequences while retaining temporal information. Within TAPP, we design a multi-scale temporal function and apply it to the temporal pyramid to generate the final proposal feature. Based on TAPP, we build a temporal action proposal generation model and an action proposal classification model, and we perform extensive experiments on two mainstream temporal action detection datasets for the temporal action proposal and temporal action detection tasks to verify both models. On the THUMOS'14 dataset, our TAPP-based models significantly outperform previous state-of-the-art methods on both tasks.
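To make the two ideas in the abstract concrete, the sketch below illustrates (a) frame-level temporal attention that down-weights background frames and (b) temporal pyramid pooling that maps an arbitrary-length proposal to a fixed-length feature. This is only a minimal illustration under assumptions: the class name TemporalAttentionPyramidPooling, the two-layer attention scorer, the feature dimension, the pyramid levels (1, 2, 4), and the use of average pooling are all choices made here for clarity; the paper's multi-scale temporal function and exact architecture are not reproduced.

```python
# Illustrative sketch only (not the paper's implementation): temporal attention
# followed by temporal pyramid pooling, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionPyramidPooling(nn.Module):
    def __init__(self, feat_dim=2048, pyramid_levels=(1, 2, 4)):
        super().__init__()
        self.pyramid_levels = pyramid_levels
        # Frame-level attention: scores each time step so background frames
        # contribute less to the pooled proposal feature (assumed two-layer MLP).
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, 1),
        )

    def forward(self, x):
        # x: (T, feat_dim) frame features of one proposal with arbitrary length T
        weights = torch.softmax(self.attn(x), dim=0)   # (T, 1) attention over time
        weighted = x * weights                         # suppress background frames
        # Temporal pyramid: pool the weighted sequence into a fixed number of bins
        # at several temporal scales, then concatenate -> fixed-length feature.
        seq = weighted.t().unsqueeze(0)                # (1, feat_dim, T)
        pooled = []
        for bins in self.pyramid_levels:
            pooled.append(F.adaptive_avg_pool1d(seq, bins).flatten(1))  # (1, feat_dim * bins)
        return torch.cat(pooled, dim=1).squeeze(0)     # (feat_dim * sum(pyramid_levels),)

# Usage: proposals of different lengths map to features of the same size.
# tapp = TemporalAttentionPyramidPooling()
# tapp(torch.randn(37, 2048)).shape == tapp(torch.randn(120, 2048)).shape  # both (2048 * 7,)
```

The pyramid levels play the role of the multiple fixed-length sequences mentioned in the abstract: each level summarizes the proposal at a different temporal granularity, so concatenating them preserves coarse temporal structure regardless of the proposal's original length.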

Original language: English
Pages (from-to): 3799-3810
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
Publication status: Published - 2023

Keywords

  • Action proposal representation
  • temporal action detection
  • temporal action proposal generation
  • untrimmed video analysis
