Multi-view action synchronization in complex background

Longfei Zhang, Shuo Tang, Shikha Singhal, Gangyi Ding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

Abstract

This paper addresses temporal synchronization of human actions in multi-view settings. Many researchers have focused on frame-by-frame alignment to synchronize such multi-view videos, exploiting features such as interest-point trajectories or 3D human motion features to detect individual events. However, since real-world backgrounds are complex and dynamic, traditional image-based features are not well suited for video representation. We explore an approach that uses robust spatio-temporal features and self-similarity matrices to represent actions across views. Multiple sequences are aligned over temporal patches (sliding windows) using the Dynamic Time Warping algorithm applied hierarchically, and the alignments are scored by meta-action classifiers. Two datasets, the Pump dataset and the Olympic dataset, are used as test cases. The experiments show the effectiveness of the method, which is also suited to general video event datasets.
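As a reading aid, the alignment idea sketched in the abstract (self-similarity matrices as a view-robust representation, aligned with Dynamic Time Warping) can be illustrated with a minimal Python sketch. This is not the authors' implementation: it simplifies the hierarchical, sliding-window scheme to plain frame-level DTW, and the descriptor choice, window radius, and toy data are illustrative assumptions.

```python
# Minimal sketch, not the paper's code: represent each view by a
# self-similarity matrix (SSM) of its per-frame features, describe each
# frame by a local SSM window, then align two views with classic DTW.
import numpy as np

def self_similarity_matrix(features):
    """features: (T, d) per-frame descriptors -> (T, T) Euclidean SSM."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def ssm_descriptors(ssm, radius=5):
    """Describe each frame by a local window of its SSM row (assumed descriptor)."""
    T = ssm.shape[0]
    padded = np.pad(ssm, ((0, 0), (radius, radius)), mode="edge")
    return np.stack([padded[t, t:t + 2 * radius + 1] for t in range(T)])

def dtw_align(desc_a, desc_b):
    """Classic DTW over frame descriptors; returns the warping path [(i, j), ...]."""
    Ta, Tb = len(desc_a), len(desc_b)
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    acc = np.full((Ta + 1, Tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy usage: two "views" of the same action at different frame rates.
rng = np.random.default_rng(0)
view_a = rng.standard_normal((60, 16))
view_b = view_a[::2] + 0.01 * rng.standard_normal((30, 16))
path = dtw_align(ssm_descriptors(self_similarity_matrix(view_a)),
                 ssm_descriptors(self_similarity_matrix(view_b)))
print(path[:5])
```

The SSM step is what gives view robustness: distances between a sequence's own frames change little under a camera change, so descriptors built from the SSM can be compared across views before DTW recovers the temporal offset.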

Original language: English
Host publication: MultiMedia Modeling - 20th Anniversary International Conference, MMM 2014, Proceedings
Pages: 151-160
Number of pages: 10
Edition: PART 2
DOI
Publication status: Published - 2014
Event: 20th Anniversary International Conference on MultiMedia Modeling, MMM 2014 - Dublin, Ireland
Duration: 6 Jan 2014 - 10 Jan 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 8326 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 20th Anniversary International Conference on MultiMedia Modeling, MMM 2014
Country/Territory: Ireland
City: Dublin
Period: 6/01/14 - 10/01/14
