VideoGraph: a non-linear video representation for efficient exploration

Lei Zhang, Qian Kun Xu, Lei Zheng Nie, Hua Huang*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

In this paper, we introduce VideoGraph, a novel non-linear representation of the scene structure of a video. Unlike the classical linear sequential organization, VideoGraph condenses the video content along the timeline by structuring scenes and materializes them as a two-dimensional graph, which enables non-linear exploration of scenes and their transitions. To construct VideoGraph, we adopt a sub-shot induced method to evaluate the spatio-temporal similarity between the shot segments of a video. The scene structure is then derived by grouping similar shots and identifying the valid transitions between scenes. The final stage represents the scene structure as a graph that reflects the scene-transition topology. VideoGraph provides a condensed representation at the scene level and facilitates non-linear browsing of videos. Experimental results demonstrate the effectiveness and efficiency of using VideoGraph to explore and access video content.
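The abstract outlines a three-stage pipeline: measure shot-to-shot similarity, group similar shots into scenes, and build a graph over scene transitions. Below is a minimal sketch of that pipeline, assuming shots are already segmented and described by feature vectors; the cosine similarity, greedy threshold grouping, and adjacency-based transition rule are placeholder assumptions for illustration, not the paper's sub-shot induced similarity measure or grouping algorithm.

```python
# Minimal sketch of a VideoGraph-style pipeline (illustrative assumptions only;
# not the authors' actual sub-shot induced method).
import numpy as np

def shot_similarity(a, b):
    """Cosine similarity between two shot descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def group_shots_into_scenes(shot_features, threshold=0.8):
    """Greedy grouping: assign each shot to the first scene whose representative it resembles."""
    scene_of_shot, scene_reps = [], []
    for feat in shot_features:
        sims = [shot_similarity(feat, rep) for rep in scene_reps]
        if sims and max(sims) >= threshold:
            scene_of_shot.append(int(np.argmax(sims)))
        else:
            scene_reps.append(feat)          # start a new scene
            scene_of_shot.append(len(scene_reps) - 1)
    return scene_of_shot

def build_scene_graph(scene_of_shot):
    """Scene-transition graph: nodes are scenes, edges link temporally adjacent, distinct scenes."""
    edges = set()
    for prev, curr in zip(scene_of_shot, scene_of_shot[1:]):
        if prev != curr:
            edges.add((prev, curr))
    return sorted(set(scene_of_shot)), sorted(edges)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy descriptors: 8 shots drawn from 3 underlying "scenes".
    base = rng.standard_normal((3, 16))
    shots = [base[i] + 0.05 * rng.standard_normal(16) for i in [0, 0, 1, 1, 0, 2, 2, 0]]
    labels = group_shots_into_scenes(shots)
    nodes, edges = build_scene_graph(labels)
    print("shot -> scene:", labels)
    print("scene nodes:", nodes, "transitions:", edges)
```

The toy example shows why the resulting structure is non-linear: a scene that recurs later in the timeline (scene 0 above) becomes a single node with multiple incoming and outgoing transitions, rather than being repeated along a linear shot sequence.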

Original language: English
Pages (from-to): 1123-1132
Number of pages: 10
Journal: Visual Computer
Volume: 30
Issue number: 10
DOI
Publication status: Published - 1 Oct 2014
