Robust scene matching method based on sparse representation and iterative correction

Sai Yang, Bo Xiao, Liping Yan*, Yuanqing Xia, Mengyin Fu, Yang Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

This article presents an efficient scene matching method that is robust to noise and occlusion. The method combines a coarse matching method with a fine matching method through iterative correction. Both the coarse and fine matching methods, inspired by sparse representation for face recognition, are inherently resistant to noise and occlusion. In each step of the iterative matching, the result of coarse matching is introduced into fine matching as prior knowledge, giving a rough range of possible positions. The fine matching then finds the most reasonable result within the rough range provided by coarse matching. Finally, the result of fine matching is fed back to coarse matching as post knowledge to correct it. Experiments demonstrate that the robustness to noise and occlusion is improved compared with matching methods without iterative correction.
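The abstract outlines an alternation between coarse and fine matching, each based on sparse representation, with the fine result re-centring the coarse search. The following is a minimal sketch of that idea, not the authors' implementation: it assumes grayscale NumPy images, encodes candidate positions as dictionary atoms built from vectorised scene patches, and uses orthogonal matching pursuit as a stand-in for whatever sparse solver the paper employs. Function names, the stride/radius parameters, and the stopping criterion are all illustrative assumptions.

```python
# Sketch of coarse-to-fine scene matching with sparse representation and
# iterative correction (hypothetical; only the high-level loop follows the abstract).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def build_dictionary(scene, tpl_shape, positions):
    """Stack vectorised, normalised scene patches (one per candidate position) as columns."""
    h, w = tpl_shape
    atoms = []
    for (r, c) in positions:
        patch = scene[r:r + h, c:c + w].astype(float).ravel()
        atoms.append(patch / (np.linalg.norm(patch) + 1e-12))
    return np.stack(atoms, axis=1)  # shape: (h*w, num_candidates)


def sparse_match(scene, template, positions, n_nonzero=5):
    """Return the candidate position whose dictionary atom receives the largest sparse coefficient."""
    D = build_dictionary(scene, template.shape, positions)
    y = template.astype(float).ravel()
    y = y / (np.linalg.norm(y) + 1e-12)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=min(n_nonzero, len(positions)),
                                    fit_intercept=False)
    omp.fit(D, y)
    best = int(np.argmax(np.abs(omp.coef_)))
    return positions[best]


def grid_positions(scene_shape, tpl_shape, stride, center=None, radius=None):
    """Candidate top-left corners on a regular grid, optionally restricted to a window around `center`."""
    H, W = scene_shape
    h, w = tpl_shape
    pos = [(r, c) for r in range(0, H - h + 1, stride)
                  for c in range(0, W - w + 1, stride)]
    if center is not None and radius is not None:
        pos = [(r, c) for (r, c) in pos
               if abs(r - center[0]) <= radius and abs(c - center[1]) <= radius]
    return pos or [center]


def iterative_match(scene, template, coarse_stride=8, radius=12, n_iters=3):
    """Alternate coarse and fine sparse matching; the fine result re-centres the coarse search."""
    center = None
    for _ in range(n_iters):
        # Coarse step: sparse matching over a sparse grid (around `center` after the first pass).
        coarse_pos = grid_positions(scene.shape, template.shape, coarse_stride,
                                    center, radius * 2 if center else None)
        rough = sparse_match(scene, template, coarse_pos)
        # Fine step: dense candidates restricted to the rough range from the coarse step;
        # the result is fed back as "post knowledge" for the next coarse step.
        fine_pos = grid_positions(scene.shape, template.shape, 1, rough, radius)
        center = sparse_match(scene, template, fine_pos)
    return center
```

Because each matching step picks the position whose atom dominates the sparse code rather than minimising a pixel-wise error, gross corruptions such as occluding blocks tend to be absorbed by the residual, which is the intuition the abstract borrows from sparse-representation face recognition.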

Original language: English
Pages (from-to): 115-123
Number of pages: 9
Journal: Image and Vision Computing
Volume: 60
DOIs
Publication status: Published - 1 Apr 2017

Keywords

  • Iterative correction
  • Scene matching
  • Sparse representation
