Learning from mobile contexts to minimize the mobile location search latency

Ling Yu Duan, Rongrong Ji*, Jie Chen, Hongxun Yao, Tiejun Huang, Wen Gao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

We propose to learn an extremely compact visual descriptor from mobile contexts for low-bit-rate mobile location search. Our scheme exploits location-related side information from the mobile device to adaptively supervise the design of the compact visual descriptor in a flexible manner, making it well suited to searching for locations or landmarks over a bandwidth-constrained wireless link. Along with the proposed compact descriptor learning, we also introduce PKUBench, a large-scale, context-aware mobile visual search benchmark dataset, which serves as the first comprehensive benchmark for quantitatively evaluating how cheaply available mobile contexts can help mobile visual search systems. Our contextual-learning-based compact descriptor is shown to outperform existing approaches in both compression rate and retrieval effectiveness.
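To make the idea concrete, here is a minimal, hypothetical sketch of context-assisted matching with compact codes. It is not the paper's learned descriptor: the projection here is random sign hashing (the paper instead learns the descriptor under contextual supervision), and all names, dimensions, and the GPS-radius pruning step are illustrative assumptions.

```python
import math
import random

random.seed(0)

DIM, BITS = 32, 16  # raw descriptor dimension and compact code length (illustrative)

# Random hyperplanes for sign hashing -- a generic stand-in for a learned
# compact descriptor; the actual method supervises this mapping with
# mobile contextual side information.
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def compact_code(desc):
    """Hash a raw descriptor into a BITS-bit integer via projection signs."""
    code = 0
    for i, plane in enumerate(PLANES):
        if sum(p * d for p, d in zip(plane, desc)) >= 0:
            code |= 1 << i
    return code

def hamming(a, b):
    """Hamming distance between two compact codes."""
    return bin(a ^ b).count("1")

def geo_dist_km(p, q):
    """Approximate great-circle distance (haversine) between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def search(query_desc, query_gps, db, radius_km=1.0):
    """Prune candidates by GPS proximity, then rank by Hamming distance."""
    qcode = compact_code(query_desc)
    nearby = [e for e in db if geo_dist_km(query_gps, e["gps"]) <= radius_km]
    return sorted(nearby, key=lambda e: hamming(qcode, e["code"]))
```

The point of the sketch is the bit budget: only a BITS-bit code (plus GPS) crosses the wireless link, while context pruning keeps the server-side candidate set small.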

Original language: English
Pages (from-to): 368-385
Number of pages: 18
Journal: Signal Processing: Image Communication
Volume: 28
Issue number: 4
DOIs
Publication status: Published - Apr 2013
Externally published: Yes

Keywords

  • Benchmark dataset
  • Compact visual descriptor
  • Contextual learning
  • Feature coding
  • Location recognition
  • Mobile visual search
