Action Shuffling for Weakly Supervised Temporal Localization

Xiao Yu Zhang, Haichao Shi, Changsheng Li, Xinchu Shi*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Weakly supervised action localization is a challenging task with extensive applications, which aims to identify actions and their corresponding temporal intervals with only video-level annotations available. This paper analyzes the order-sensitive and location-insensitive properties of actions, and embodies them in a self-augmented learning framework to improve weakly supervised action localization performance. To be specific, we propose a novel two-branch network architecture with intra/inter-action shuffling, referred to as ActShufNet. The intra-action shuffling branch lays out a self-supervised order-prediction task to augment the video representation with inner-video relevance, whereas the inter-action shuffling branch imposes a reorganizing strategy on the existing action contents to augment the training set without resorting to any external resources. Furthermore, global-local adversarial training is presented to enhance the model's robustness to irrelevant noise. Extensive experiments are conducted on three benchmark datasets, and the results clearly demonstrate the efficacy of the proposed method.
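
To give a concrete picture of the intra-action shuffling branch, the sketch below illustrates one possible form of the order-prediction pretext task described in the abstract: snippet-level features of an action segment are split into chunks, the chunks are permuted, and a classifier is trained to recover which permutation was applied. This is an illustrative assumption rather than the authors' implementation; the chunk count K, feature shapes, and function names are hypothetical.

```python
import itertools
import numpy as np

# Illustrative sketch (not the paper's code): intra-action shuffling as an
# order-prediction pretext task. An action segment's snippet features are
# split into K chunks, permuted, and the permutation index becomes the
# self-supervised label. K, shapes, and names are assumptions.

K = 3                                            # assumed number of chunks per segment
PERMS = list(itertools.permutations(range(K)))   # K! candidate orders (classes)

def make_order_prediction_sample(features, rng):
    """features: (T, D) snippet-level features of one action segment."""
    chunks = np.array_split(features, K)         # split along the temporal axis
    label = rng.integers(len(PERMS))             # sample a permutation class
    perm = PERMS[label]
    shuffled = np.concatenate([chunks[i] for i in perm], axis=0)
    return shuffled, label                       # a classifier learns shuffled -> label

rng = np.random.default_rng(0)
segment = rng.standard_normal((15, 2048))        # e.g. 15 snippets of I3D-like features
x, y = make_order_prediction_sample(segment, rng)
print(x.shape, y)                                # (15, 2048) and a class index in [0, K!-1]
```

Training a small head to predict the permutation class would provide the auxiliary order-sensitive signal the abstract attributes to the intra-action branch.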

Original language: English
Pages (from-to): 4447-4457
Number of pages: 11
Journal: IEEE Transactions on Image Processing
Volume: 31
DOIs
Publication status: Published - 2022

Keywords

  • Temporal action localization
  • inter-action
  • intra-action
  • self-supervised
