Motion-State-Adaptive Video Summarization via Spatiotemporal Analysis

Yunzuo Zhang, Ran Tao*, Yue Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

47 Citations (Scopus)

Abstract

With the explosive growth of video data, managing and browsing videos in a timely and effective manner has become an urgent problem, particularly in surveillance applications. Video summarization, as a feasible solution, has attracted increasing attention. In this paper, we propose a novel motion-state-adaptive video summarization method based on spatiotemporal analysis. To overcome the low efficiency of traditional video summarization, the proposed method uses spatiotemporal slices to analyze object motion trajectories and selects motion state changes as the metric for summarizing videos. Initially, a motion-active segment is detected using motion power. Subsequently, motion state changes are modeled as collinear segments on a spatiotemporal slice (STS-CS), and an attention curve based on the STS-CS model is formed to extract the key frames. Finally, a visually distinguishing mechanism is employed to refine the key frames. The experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in terms of both computational efficiency and the preservation of detailed motion dynamics, while achieving comparable subjective performance.
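To illustrate the general pipeline the abstract describes, the following is a minimal sketch in Python (NumPy only): it builds a spatiotemporal slice from a fixed scanline, computes a per-frame motion power signal, thresholds it to find a motion-active segment, and uses changes in that signal as a crude attention proxy for picking candidate key frames. The fixed-row slice, the mean-based threshold, and the gradient-based attention curve are assumptions for illustration; the paper's STS-CS collinear-segment model and visually distinguishing refinement are not reproduced here.

```python
import numpy as np


def spatiotemporal_slice(frames, row=None):
    """Stack one scanline per frame into a (T, W) spatiotemporal slice.

    `frames` is a (T, H, W) grayscale array; `row` defaults to the middle row.
    """
    T, H, W = frames.shape
    row = H // 2 if row is None else row
    return frames[:, row, :].astype(np.float32)


def motion_power(sts):
    """Per-frame motion power: mean squared temporal difference along the slice."""
    diff = np.diff(sts, axis=0)                      # (T-1, W) frame-to-frame change
    return np.concatenate([[0.0], (diff ** 2).mean(axis=1)])


def motion_active_segments(power, thresh=None):
    """Return (start, end) frame-index pairs where motion power exceeds a threshold."""
    thresh = power.mean() if thresh is None else thresh
    active = power > thresh
    segments, start = [], None
    for t, a in enumerate(active):
        if a and start is None:
            start = t
        elif not a and start is not None:
            segments.append((start, t - 1))
            start = None
    if start is not None:
        segments.append((start, len(active) - 1))
    return segments


def attention_curve(power):
    """Crude attention proxy: magnitude of change in the motion power signal."""
    return np.abs(np.gradient(power))


if __name__ == "__main__":
    # Synthetic clip: a bright bar sweeps right, pauses, then sweeps back.
    T, H, W = 60, 48, 64
    frames = np.zeros((T, H, W), dtype=np.float32)
    pos = np.concatenate([np.arange(20), np.full(20, 19), np.arange(19, -1, -1)])
    for t in range(T):
        frames[t, :, pos[t] * 3: pos[t] * 3 + 4] = 1.0

    sts = spatiotemporal_slice(frames)
    power = motion_power(sts)
    attn = attention_curve(power)
    key_frames = np.argsort(attn)[-5:]               # frames with strongest state change

    print("motion-active segments:", motion_active_segments(power))
    print("candidate key frames:", sorted(key_frames.tolist()))
```

In this toy clip the attention proxy peaks where the bar starts, stops, and reverses, which is the kind of motion state change the abstract identifies as the summarization metric.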

Original language: English
Article number: 7428932
Pages (from-to): 1340-1352
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 27
Issue number: 6
Publication status: Published - Jun 2017

Keywords

  • Motion state adaptive
  • spatiotemporal analysis
  • video summarization
