Multi-group-multi-class domain adaptation for event recognition

Yang Feng, Xinxiao Wu*, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, the authors propose a multi-group-multi-class domain adaptation framework that recognises events in consumer videos by leveraging a large number of web videos. The framework extends the multi-class support vector machine with a novel data-dependent regulariser, which encourages the event classifier to make consistent predictions on consumer videos. To obtain web videos, the authors search with several event-related keywords and refer to the videos returned by one keyword search as a group. They also adopt a video representation, the average of the convolutional neural network features of the video frames, to improve performance. Comprehensive experiments on two real-world consumer video datasets demonstrate the effectiveness of the method for event recognition in consumer videos.
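The video representation described above, averaging per-frame CNN features into a single video-level vector, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the use of NumPy arrays are assumptions, and real CNN features would have far more dimensions than this toy example.

```python
import numpy as np


def video_representation(frame_features):
    """Average per-frame CNN features into one video-level vector.

    frame_features: array of shape (num_frames, feature_dim), one row
    of CNN activations per sampled frame (hypothetical layout).
    """
    frame_features = np.asarray(frame_features, dtype=np.float64)
    # Mean over the frame axis yields a fixed-length video descriptor
    # regardless of how many frames the video contains.
    return frame_features.mean(axis=0)


# Toy example: 5 frames with 4-dimensional features.
feats = np.arange(20, dtype=np.float64).reshape(5, 4)
rep = video_representation(feats)
print(rep)  # → [ 8.  9. 10. 11.]
```

Averaging makes the descriptor length independent of video duration, so videos of different lengths can be fed to the same multi-class SVM.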

Original language: English
Pages (from-to): 60-66
Number of pages: 7
Journal: IET Computer Vision
Volume: 10
Issue number: 1
DOI
Publication status: Published - 1 Feb 2016
