Multi-group-multi-class domain adaptation for event recognition

Yang Feng, Xinxiao Wu*, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, the authors propose a multi-group-multi-class domain adaptation framework for recognising events in consumer videos by leveraging a large number of web videos. The framework extends the multi-class support vector machine with a novel data-dependent regulariser that encourages the event classifier to be consistent on consumer videos. To obtain web videos, the authors search using several event-related keywords and refer to the videos returned by one keyword search as a group. For better performance, they also adopt a video representation computed as the average of the convolutional neural network features of the video frames. Comprehensive experiments on two real-world consumer video datasets demonstrate the effectiveness of the method for event recognition in consumer videos.
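The video representation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-frame CNN features have already been extracted (the extraction network and feature dimension are unspecified in the abstract), and simply averages them into a single video-level descriptor.

```python
import numpy as np

def video_representation(frame_features):
    """Average per-frame CNN features into one video-level descriptor.

    frame_features: array of shape (num_frames, feature_dim); each row is
    a CNN feature vector for one frame (assumed precomputed elsewhere).
    """
    frame_features = np.asarray(frame_features, dtype=np.float64)
    # Mean over the frame axis yields a fixed-length vector per video,
    # regardless of how many frames the video contains.
    return frame_features.mean(axis=0)

# Illustrative example: 3 frames with 4-dimensional features.
frames = np.array([[1.0, 2.0, 3.0, 4.0],
                   [3.0, 2.0, 1.0, 0.0],
                   [2.0, 2.0, 2.0, 2.0]])
rep = video_representation(frames)  # -> array([2., 2., 2., 2.])
```

The averaged descriptor could then be fed to the multi-class SVM; the data-dependent regulariser itself is not sketched here, as the abstract does not specify its form.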

Original language: English
Pages (from-to): 60-66
Number of pages: 7
Journal: IET Computer Vision
Volume: 10
Issue number: 1
DOIs
Publication status: Published - 1 Feb 2016
