Annotating videos from the web images

Han Wang*, Xinxiao Wu, Yunde Jia

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

In this paper, we propose a generic framework for annotating videos based on web images. To greatly reduce expensive human annotation on a tremendous quantity of videos, it is necessary to transfer the knowledge learned from web images, a rich source of information, to videos. A discriminative structural model is proposed to transfer knowledge from web images (the auxiliary domain) to videos (the target domain) by jointly modeling the interaction between video labels and web image attributes. The advantage of our framework is that it allows us to infer video labels using information from different domains, i.e., the video itself and image attributes. Experimental results on the UCF Sports Action Dataset demonstrate that knowledge gained from web images is effective for video annotation.
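The abstract does not give the model's exact form, only that video labels and web image attributes are scored jointly across the two domains. As a rough, hypothetical sketch of that idea, the snippet below combines a video-feature term with a label-attribute interaction term in a linear scoring function and infers the highest-scoring label. All names, dimensions, and the linear form are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of the joint-scoring idea from the abstract: a label is
# scored using both the video's own features (target domain) and attribute
# responses predicted by classifiers trained on web images (auxiliary domain).
import numpy as np

def score_label(video_feat, attr_scores, label, w_video, w_attr):
    """Joint score of one candidate label across both domains.

    video_feat  : (d_v,) feature vector of the video itself
    attr_scores : (d_a,) attribute responses from web-image classifiers
    w_video     : (n_labels, d_v) video-feature weights
    w_attr      : (n_labels, d_a) label-attribute interaction weights
    """
    return w_video[label] @ video_feat + w_attr[label] @ attr_scores

def annotate(video_feat, attr_scores, w_video, w_attr):
    """Infer the video label that maximizes the joint score."""
    n_labels = w_video.shape[0]
    scores = [score_label(video_feat, attr_scores, y, w_video, w_attr)
              for y in range(n_labels)]
    return int(np.argmax(scores))
```

Under this sketch, annotation degrades gracefully: if the attribute weights are zero the model falls back to video features alone, while nonzero interaction weights let web-image knowledge influence the inferred label.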

Original language: English
Title of host publication: ICPR 2012 - 21st International Conference on Pattern Recognition
Pages: 2801-2804
Number of pages: 4
Publication status: Published - 2012
Event: 21st International Conference on Pattern Recognition, ICPR 2012 - Tsukuba, Japan
Duration: 11 Nov 2012 - 15 Nov 2012

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651

Conference

Conference: 21st International Conference on Pattern Recognition, ICPR 2012
Country/Territory: Japan
City: Tsukuba
Period: 11/11/12 - 15/11/12
