Weighted two-step aggregated VLAD for image retrieval

  • Hao Liu*
  • Qingjie Zhao
  • Jimmy T. Mbelwa
  • Song Tang
  • Jianwei Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

The vector of locally aggregated descriptors (VLAD) has been demonstrated to be efficient and effective in image retrieval and classification tasks. However, because the method adopts a small codebook, its division of the feature space is coarse and its discriminative power is limited. Toward a discriminative and compact image representation for visual search, we develop a novel aggregation method for building VLAD, called two-step aggregated VLAD. First, we propose a bidirectional quantization, from the views of both descriptors and visual words, to obtain a finer division of the feature space. Second, we impose a probabilistic inverse document frequency to weight the local descriptors, highlighting the discriminative ones. Experimental results on extensive datasets show that our method yields significant improvements and is competitive with state-of-the-art methods.
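For context, the baseline the paper builds on is standard VLAD aggregation: each local descriptor is assigned to its nearest visual word, and the residuals are summed per word. The sketch below is a minimal, hedged illustration of that baseline with an optional per-descriptor weight vector, standing in for the paper's probabilistic IDF weights; the exact bidirectional quantization and weighting formulas are not given in this abstract, so everything beyond plain VLAD here is an assumption.

```python
import numpy as np

def vlad(descriptors, codebook, weights=None):
    """Plain VLAD: sum of (optionally weighted) residuals per visual word.

    descriptors : (n, d) array of local descriptors from one image
    codebook    : (k, d) array of visual-word centroids
    weights     : optional (n,) per-descriptor weights; a stand-in for the
                  paper's probabilistic IDF scores (uniform if None)
    """
    n, d = descriptors.shape
    k = codebook.shape[0]
    if weights is None:
        weights = np.ones(n)
    # Hard-assign each descriptor to its nearest visual word (Euclidean).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # Accumulate weighted residuals per visual word.
    v = np.zeros((k, d))
    for i in range(n):
        v[assign[i]] += weights[i] * (descriptors[i] - codebook[assign[i]])
    # Signed square-root (power-law) and L2 normalization, standard for VLAD.
    v = v.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

The resulting k·d-dimensional vector is what the paper's two-step scheme refines: finer assignment via bidirectional quantization, and non-uniform `weights` derived from a probabilistic inverse document frequency.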

Original language: English
Pages (from-to): 1783-1795
Number of pages: 13
Journal: Visual Computer
Volume: 35
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2019

Keywords

  • Content-based image retrieval
  • Image representation
  • Local descriptors
  • VLAD
