Effective visual tracking by pairwise metric learning

Chenwei Deng*, Baoxian Wang, Weisi Lin, Guang Bin Huang, Baojun Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

For robust visual tracking, an appearance model should separate the object well from its background while adapting accurately to the object's appearance variations. However, most existing tracking methods focus mainly on only one of these two aspects, or combine them through two separate modules at the price of roughly doubled computational cost. In this paper, we present a novel appearance model for robust visual tracking based on pairwise metric learning. Specifically, visual tracking is cast as a pairwise regression problem, and an extreme learning machine (ELM) is used to construct the pairwise regression framework. In ELM-based pairwise training, two constraints are enforced: target observations must have regression outputs different from those of background observations, while the various target observations collected during tracking should have similar regression outputs. Discriminative and generative capabilities are thus both accounted for within a single tracking model. Moreover, online sequential ELM (OS-ELM) is used to update the resulting appearance model, leading to a more robust tracking process. Extensive experimental evaluations on challenging video sequences demonstrate the effectiveness and efficiency of the proposed tracker.
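To make the idea concrete, the sketch below shows one way an ELM-based pairwise regression model with OS-ELM updating could be organized. It is a minimal illustration under stated assumptions, not the authors' implementation: the pair construction (feature concatenation, label 1 for target–target pairs and 0 for target–background pairs), the sigmoid hidden layer, the regularization value, and all names (`PairwiseELM`, `make_pairs`, `partial_fit`) are hypothetical choices made here for clarity.

```python
import numpy as np

# Illustrative sketch of an ELM-based pairwise regression model for tracking.
# Pair construction, labels, and hyperparameters are assumptions, not the paper's code.

class PairwiseELM:
    def __init__(self, n_hidden=100, reg=1e-2, rng=None):
        self.n_hidden = n_hidden
        self.reg = reg                      # ridge regularization strength
        self.rng = rng or np.random.default_rng(0)
        self.W = None                       # random input weights (fixed after init)
        self.b = None                       # random hidden biases
        self.beta = None                    # output weights (learned)
        self.P = None                       # inverse matrix kept for OS-ELM updates

    def _hidden(self, X):
        # Sigmoid hidden layer: H = g(X W + b), with W and b fixed at random.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X_pairs, y):
        """Batch ELM training on pair features X_pairs with pairwise targets y."""
        d = X_pairs.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X_pairs)
        # Regularized least squares: beta = (H^T H + reg*I)^-1 H^T y
        self.P = np.linalg.inv(H.T @ H + self.reg * np.eye(self.n_hidden))
        self.beta = self.P @ H.T @ y
        return self

    def partial_fit(self, X_pairs, y):
        """OS-ELM-style recursive update with a new chunk of pairs."""
        H = self._hidden(X_pairs)
        K = np.linalg.inv(np.eye(len(y)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)
        return self

    def predict(self, X_pairs):
        return self._hidden(X_pairs) @ self.beta


def make_pairs(targets, backgrounds):
    """Concatenate features into pairs: target-target pairs are labeled 1
    (outputs should agree), target-background pairs 0 (outputs should differ)."""
    X, y = [], []
    for i, t in enumerate(targets):
        for t2 in targets[i + 1:]:
            X.append(np.concatenate([t, t2])); y.append(1.0)
        for bg in backgrounds:
            X.append(np.concatenate([t, bg])); y.append(0.0)
    return np.asarray(X), np.asarray(y)
```

In use, `fit` would be called once on pairs built from the first frame's target and background samples, and `partial_fit` on pairs from subsequent frames, mirroring the batch-then-online-sequential training pattern of OS-ELM; at test time, candidate regions are scored by pairing them with stored target observations and picking the candidate whose pairwise outputs are closest to the target-pair value.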

Original language: English
Pages (from-to): 266-275
Number of pages: 10
Journal: Neurocomputing
Volume: 261
DOIs
Publication status: Published - 25 Oct 2017

Keywords

  • Appearance modeling
  • Extreme learning machine
  • Online sequential updating
  • Pairwise metric learning
  • Robust visual tracking
