A computable visual attention model for video skimming

Longfei Zhang*, Yuanda Cao, Gangyi Ding, Yong Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

17 Citations (Scopus)

Abstract

A novel computable visual attention model (VAM) for video skimming is proposed. Videos carry more motion features than images do, and objects in videos produce different attention effects depending on their situations, positions, motions, and appearances. Static visual attention models are based on spatial distribution, visual objects, or both, but fall short in modeling temporal attention effects. The proposed VAM adopts the alive-time (AT) of a visual object as a new descriptor to improve the accuracy of locating highlights in a video clip, and thus produces better video skimming results. The model is represented by a set of descriptors so that it is computable and provides a generic framework for video analysis. The temporal variations of attention value in a video clip are weighted by a non-linear Chi-square distribution. The highlights of the frames in the video are then represented by the attention window (AW), and the attention values of the visual objects (AOs) are tracked and used to generate the attention curve of the video. Finally, a video skimming strategy selects the highlights of the video by analyzing the attention curve. Experimental results show that the proposed model makes the skimming results 15%-25% shorter than previous methods.
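The pipeline the abstract describes (per-frame attention values weighted by a non-linear Chi-square factor, an attention curve, and a thresholding skim strategy) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `frame_attention` input, the mapping of frame position into the Chi-square support, the degrees of freedom `k=4`, and the `keep_ratio` threshold are all assumptions made for the example; the paper's actual weighting uses the alive-time descriptor of each visual object.

```python
import math

def chi2_pdf(x, k):
    """Chi-square probability density, used here as the non-linear
    temporal weight mentioned in the abstract."""
    if x <= 0:
        return 0.0
    return (x ** (k / 2 - 1) * math.exp(-x / 2)) / (2 ** (k / 2) * math.gamma(k / 2))

def attention_curve(frame_attention, k=4):
    """Weight each frame's raw attention value by a Chi-square factor of
    its normalized temporal position (a simplifying assumption; the
    paper weights by the alive-time of visual objects)."""
    n = len(frame_attention)
    curve = []
    for t, a in enumerate(frame_attention):
        x = 8.0 * (t + 1) / n  # map frame position into the pdf's support
        curve.append(a * chi2_pdf(x, k))
    return curve

def skim(frame_attention, keep_ratio=0.3):
    """Select the highest-attention frames as the skim
    (a simple thresholding strategy on the attention curve)."""
    curve = attention_curve(frame_attention)
    n_keep = max(1, int(len(curve) * keep_ratio))
    ranked = sorted(range(len(curve)), key=lambda i: curve[i], reverse=True)
    return sorted(ranked[:n_keep])

# Example: a clip whose middle frames carry the strongest attention
# is skimmed down to those frame indices.
print(skim([1, 2, 3, 9, 8, 1, 1, 2, 9, 1]))  # → [2, 3, 4]
```

Ranking by the weighted curve rather than the raw attention values is what lets the temporal weighting suppress attention spikes that occur at otherwise unimportant positions in the clip.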

Original language: English
Host publication title: Proceedings - 10th IEEE International Symposium on Multimedia, ISM 2008
Pages: 667-672
Number of pages: 6
DOI
Publication status: Published - 2008
Event: 10th IEEE International Symposium on Multimedia, ISM 2008 - Berkeley, CA, United States
Duration: 15 Dec 2008 → 17 Dec 2008

Publication series

Name: Proceedings - 10th IEEE International Symposium on Multimedia, ISM 2008

Conference

Conference: 10th IEEE International Symposium on Multimedia, ISM 2008
Country/Territory: United States
City: Berkeley, CA
Period: 15/12/08 → 17/12/08
