TY - JOUR
T1 - Motion-State-Adaptive Video Summarization via Spatiotemporal Analysis
AU - Zhang, Yunzuo
AU - Tao, Ran
AU - Wang, Yue
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2017/6
Y1 - 2017/6
AB - With the explosive growth of video data, managing and browsing videos in a timely and effective manner has become an urgent problem, particularly in surveillance applications. Video summarization is attracting considerable attention as a feasible solution. In this paper, we propose a novel motion-state-adaptive video summarization method based on spatiotemporal analysis. To overcome the low efficiency of traditional video summarization, the proposed method utilizes spatiotemporal slices to analyze object motion trajectories and selects motion state changes as a metric for summarizing videos. Initially, a motion-active segment is detected using motion power. Subsequently, motion state changes are modeled as collinear segments on a spatiotemporal slice (STS-CS), and an attention curve based on the STS-CS model is formed to extract the key frames. Finally, a visually distinguishing mechanism is employed to refine the key frames. The experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in both computational efficiency and the preservation of detailed video motion dynamics, while achieving comparable subjective performance.
KW - Motion-state adaptive
KW - spatiotemporal analysis
KW - video summarization
UR - http://www.scopus.com/inward/record.url?scp=85020254266&partnerID=8YFLogxK
DO - 10.1109/TCSVT.2016.2539638
M3 - Article
AN - SCOPUS:85020254266
SN - 1051-8215
VL - 27
SP - 1340
EP - 1352
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 6
M1 - 7428932
ER -