VideoGraph: a non-linear video representation for efficient exploration

Lei Zhang, Qian Kun Xu, Lei Zheng Nie, Hua Huang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

In this paper, we introduce VideoGraph, a novel non-linear representation of the scene structure of a video. Unlike the classical linear, sequential organization, VideoGraph condenses the video content along the timeline by structuring it into scenes, and materializes the result as a two-dimensional graph that enables non-linear exploration of the scenes and their transitions. To construct VideoGraph, we adopt a sub-shot induced method to evaluate the spatio-temporal similarity between shot segments of the video. The scene structure is then derived by grouping similar shots and identifying the valid transitions between scenes. The final stage is to represent the scene structure as a graph that reflects the scene-transition topology. VideoGraph provides a condensed representation at the scene level and supports browsing videos in a non-linear manner. Experimental results demonstrate the effectiveness and efficiency of exploring and accessing video content with VideoGraph.
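To illustrate the overall pipeline described in the abstract, the following is a minimal sketch (not the authors' code): it groups shots into scenes by descriptor similarity and records which scene-to-scene transitions actually occur along the timeline, yielding a directed graph. The greedy cosine-similarity grouping, the `sim_threshold` parameter, and the function name `build_videograph` are illustrative assumptions standing in for the paper's sub-shot induced similarity measure and valid-transition identification.

```python
# Illustrative sketch: build a scene-transition graph from per-shot descriptors.
# The similarity measure and grouping rule are simplified placeholders for the
# paper's sub-shot induced spatio-temporal similarity.
import numpy as np
import networkx as nx


def build_videograph(shot_features, sim_threshold=0.8):
    """shot_features: (num_shots, dim) array, one descriptor per shot in
    temporal order. Returns (graph, shot_to_scene), where graph is a directed
    graph whose nodes are scenes and whose edges are observed transitions."""
    # Normalize descriptors so cosine similarity is a plain dot product.
    feats = shot_features / np.linalg.norm(shot_features, axis=1, keepdims=True)

    # Greedy grouping: assign each shot to the most similar existing scene if
    # similarity exceeds the threshold, otherwise start a new scene.
    scene_reps = []      # one representative descriptor per scene
    shot_to_scene = []   # scene index for each shot, in temporal order
    for f in feats:
        sims = [float(f @ r) for r in scene_reps]
        if sims and max(sims) >= sim_threshold:
            shot_to_scene.append(int(np.argmax(sims)))
        else:
            scene_reps.append(f)
            shot_to_scene.append(len(scene_reps) - 1)

    # Scene-transition topology: an edge for each change of scene between
    # consecutive shots, weighted by how often that transition occurs.
    g = nx.DiGraph()
    g.add_nodes_from(range(len(scene_reps)))
    for a, b in zip(shot_to_scene, shot_to_scene[1:]):
        if a != b:
            w = g[a][b]["weight"] + 1 if g.has_edge(a, b) else 1
            g.add_edge(a, b, weight=w)
    return g, shot_to_scene


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((20, 64))  # 20 shots with random 64-d descriptors
    graph, labels = build_videograph(demo)
    print(graph.number_of_nodes(), "scenes;", graph.number_of_edges(), "transitions")
```

The resulting graph condenses the video to scene nodes and transition edges, which is the structure a non-linear browsing interface can navigate instead of scrubbing the raw timeline.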

Original language: English
Pages (from-to): 1123-1132
Number of pages: 10
Journal: Visual Computer
Volume: 30
Issue number: 10
Publication status: Published - 1 Oct 2014

Keywords

  • Exploration
  • Sub-shot
  • Video scene
