Unsupervised learning of event AND-OR grammar and semantics from video

Zhangzhang Si*, Mingtao Pei, Benjamin Yao, Song Chun Zhu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

68 Citations (Scopus)

Abstract

We study the problem of automatically learning an event AND-OR grammar from videos of a given environment, e.g. an office where students conduct daily activities. We propose to learn the event grammar under the information projection and minimum description length principles in a coherent probabilistic framework, without manual supervision about which events happen and when they happen. First, a predefined set of unary and binary relations is detected for each video frame, e.g. the agent's position, pose and interaction with the environment. Their co-occurrences are then clustered into a dictionary of simple and transient atomic actions. Recursively, these actions are grouped into longer and more complex events, resulting in a stochastic event grammar. By modeling time constraints between successive events, the learned grammar becomes context-sensitive. We introduce a new dataset of surveillance-style video recorded in an office, and present a prototype system for video analysis integrating bottom-up detection, grammatical learning and parsing. On this dataset, the learning algorithm automatically discovers important events and constructs a stochastic grammar, which can be used to accurately parse newly observed video. The learned grammar can serve as a prior to improve the noisy bottom-up detection of atomic actions. It can also be used to infer the semantics of the scene. In general, the event grammar is an efficient way to acquire common knowledge from video.
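As a rough illustration of the recursive grouping step described in the abstract, below is a minimal Python sketch that greedily merges frequently co-occurring adjacent atomic actions into composite nodes whenever the merge lowers an approximate description length. This is not the paper's actual algorithm: the function names (learn_and_nodes, replace_pair), the scoring formula and the toy office action labels are invented for illustration, and the full method additionally learns OR nodes, models temporal relations between events, and uses information projection, none of which is reproduced here.

```python
from collections import Counter
from math import log2


def description_length(sequences, grammar):
    # Approximate description length: bits to encode the corpus under the
    # current symbol inventory plus bits to store the grammar rules.
    counts = Counter(sym for seq in sequences for sym in seq)
    total = sum(counts.values())
    data_bits = -sum(c * log2(c / total) for c in counts.values())
    rule_bits = sum(1 + len(rhs) for rhs in grammar.values()) * log2(max(len(counts), 2))
    return data_bits + rule_bits


def replace_pair(seq, a, b, new_sym):
    # Rewrite every non-overlapping occurrence of the pair (a, b) as new_sym.
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(new_sym)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out


def learn_and_nodes(sequences, max_rules=20):
    # Greedily merge the most frequent adjacent pair of symbols into a new
    # composite node whenever the merge reduces the description length.
    grammar = {}  # composite symbol -> (left child, right child)
    for i in range(max_rules):
        pairs = Counter()
        for seq in sequences:
            pairs.update(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        new_sym = f"E{i}"
        candidate = {**grammar, new_sym: (a, b)}
        merged = [replace_pair(seq, a, b, new_sym) for seq in sequences]
        if description_length(merged, candidate) < description_length(sequences, grammar):
            sequences, grammar = merged, candidate
        else:
            break
    return grammar, sequences


if __name__ == "__main__":
    # Toy atomic-action sequences, standing in for detections from office video.
    videos = [
        ["enter", "sit", "type", "type", "stand", "leave"],
        ["enter", "sit", "type", "stand", "leave"],
        ["enter", "sit", "type", "type", "stand", "leave"],
    ]
    rules, rewritten = learn_and_nodes(videos)
    print(rules)      # e.g. {'E0': ('enter', 'sit'), ...}
    print(rewritten)  # corpus re-expressed with composite event symbols
```

On the toy input, the sketch discovers that "enter" followed by "sit" recurs often enough to justify a composite event symbol, mirroring at a very small scale how repeated co-occurrence plus a description-length criterion can drive the grouping of atomic actions into longer events.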

Original language: English
Title of host publication: 2011 International Conference on Computer Vision, ICCV 2011
Pages: 41-48
Number of pages: 8
DOI
Publication status: Published - 2011
Event: 2011 IEEE International Conference on Computer Vision, ICCV 2011 - Barcelona, Spain
Duration: 6 Nov 2011 → 13 Nov 2011

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision

Conference

Conference: 2011 IEEE International Conference on Computer Vision, ICCV 2011
Country/Territory: Spain
City: Barcelona
Period: 6/11/11 → 13/11/11
