Abstract
In this study, the authors propose a multi-group-multi-class domain adaptation framework to recognise events in consumer videos by leveraging a large number of web videos. The framework extends the multi-class support vector machine with a novel data-dependent regulariser, which encourages the event classifier to produce consistent predictions on consumer videos. To obtain web videos, the authors search for them using several event-related keywords and refer to the videos returned by one keyword search as a group. They also adopt a video representation formed by averaging the convolutional neural network (CNN) features of the video frames for better performance. Comprehensive experiments on two real-world consumer video datasets demonstrate the effectiveness of the method for event recognition in consumer videos.
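As a rough illustration of the frame-averaging video representation mentioned in the abstract, the sketch below computes per-frame CNN features and averages them into a single video-level descriptor. This is not the authors' released code: the ResNet-18 backbone, the preprocessing pipeline, and the assumption that frames arrive as PIL images are illustrative choices, since the abstract does not specify the CNN architecture.

```python
# Minimal sketch (assumed details, not the paper's implementation):
# average per-frame CNN features into one video-level descriptor.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained CNN used as a frame-level feature extractor; the final
# classification layer is replaced so the output is a feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def video_descriptor(frames):
    """Return the mean CNN feature over a list of PIL video frames."""
    with torch.no_grad():
        feats = torch.stack([
            backbone(preprocess(frame).unsqueeze(0)).squeeze(0)
            for frame in frames
        ])
    # One fixed-length vector per video, later fed to the event classifier.
    return feats.mean(dim=0)
```

The resulting fixed-length vector can then serve as the input to any video-level classifier, such as the multi-class SVM-based framework described above.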
| Original language | English |
| --- | --- |
| Pages (from-to) | 60-66 |
| Number of pages | 7 |
| Journal | IET Computer Vision |
| Volume | 10 |
| Issue | 1 |
| DOI | |
| Publication status | Published - 1 Feb 2016 |