Rebuilding visual vocabulary via spatial-temporal context similarity for video retrieval

Lei Wang, Eyad Elyan, Dawei Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

2 Citations (Scopus)

Abstract

The Bag-of-visual-Words (BovW) model is one of the most popular visual content representation methods for large-scale content-based video retrieval. The visual words are quantized according to a visual vocabulary, which is generated by a visual feature clustering process (e.g. K-means, GMM, etc.). In principle, two types of errors can occur in the quantization process, referred to as the UnderQuantize and OverQuantize problems. The former causes ambiguities and often leads to false visual content matches, while the latter generates synonyms and may lead to missed true matches. Unlike most state-of-the-art research, which concentrates on enhancing the BovW model by disambiguating the visual words, in this paper we aim to address the OverQuantize problem by incorporating the similarity of the spatial-temporal contexts associated with pair-wise visual words. Visual words with similar context and appearance are assumed to be synonyms. These synonyms in the initial visual vocabulary are then merged to rebuild a more compact and descriptive vocabulary. Our approach was evaluated on the TRECVID2002 and CC-WEB-VIDEO datasets for two typical Query-By-Example (QBE) video retrieval applications. Experimental results demonstrated substantial improvements in retrieval performance over the initial visual vocabulary generated by the BovW model. We also show that our approach can be combined with the state-of-the-art disambiguation method to further improve the performance of QBE video retrieval.
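The merging step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names, the use of cosine similarity for both appearance and context, and the threshold values are all assumptions made for the example. Each visual word is represented by an appearance centroid and a spatial-temporal context histogram; pairs that are similar in both respects are treated as synonyms and collapsed with a union-find structure.

```python
import numpy as np

def rebuild_vocabulary(centroids, contexts, app_thresh=0.9, ctx_thresh=0.8):
    """Merge synonymous visual words into a compact vocabulary.

    A pair of words is assumed to be a synonym pair when BOTH its
    appearance similarity (between cluster centroids) and its
    spatial-temporal context similarity (between context histograms)
    exceed the given thresholds. Returns a list mapping each initial
    word index to its new, compact word id.
    """
    n = len(centroids)
    parent = list(range(n))  # union-find forest over word indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0

    for i in range(n):
        for j in range(i + 1, n):
            if (cos(centroids[i], centroids[j]) >= app_thresh
                    and cos(contexts[i], contexts[j]) >= ctx_thresh):
                parent[find(j)] = find(i)  # merge the synonym pair

    # Relabel the surviving roots into a compact id space 0..k-1.
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]

# Toy usage: words 0 and 1 are near-duplicates in both appearance and
# context, word 2 is distinct, so the vocabulary shrinks from 3 to 2.
centroids = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
contexts = np.array([[5.0, 1.0, 0.0], [5.0, 1.0, 0.0], [0.0, 0.0, 7.0]])
mapping = rebuild_vocabulary(centroids, contexts)
```

In the toy run above, `mapping[0] == mapping[1]` while word 2 keeps its own id, mirroring how OverQuantized synonyms are collapsed while genuinely distinct words are preserved.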

Original language: English
Title of host publication: MultiMedia Modeling - 20th Anniversary International Conference, MMM 2014, Proceedings
Pages: 74-85
Number of pages: 12
Edition: PART 1
DOI: 10.1007/978-3-319-04114-8_7
Publication status: Published - 2014
Externally published: Yes
Event: 20th Anniversary International Conference on MultiMedia Modeling, MMM 2014 - Dublin, Ireland
Duration: 6 Jan 2014 - 10 Jan 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 8325 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 20th Anniversary International Conference on MultiMedia Modeling, MMM 2014
Country/Territory: Ireland
City: Dublin
Period: 6/01/14 - 10/01/14


Cite this

Wang, L., Elyan, E., & Song, D. (2014). Rebuilding visual vocabulary via spatial-temporal context similarity for video retrieval. In MultiMedia Modeling - 20th Anniversary International Conference, MMM 2014, Proceedings (PART 1 ed., pp. 74-85). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 8325 LNCS, No. PART 1). https://doi.org/10.1007/978-3-319-04114-8_7