Joint view-identity manifold for target tracking and recognition

Jiulu Gong, Guoliang Fan*, Liangjiang Yu, Joseph P. Havlicek, Derong Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

A new joint view-identity manifold (JVIM) is proposed for multi-view shape modeling and applied to automated target tracking and recognition (ATR). This work improves on our recent work, in which the view and identity manifolds were assumed to be independent for multi-view, multi-target modeling. A local linear Gaussian process latent variable model (LL-GPLVM) is used to learn a probabilistic JVIM that jointly captures both inter-class and intra-class variability of 2D target shapes under arbitrary viewpoints in a single shared latent space. A particle filter-based ATR algorithm is developed to simultaneously infer the view and identity parameters along the JVIM, so that target tracking and recognition are achieved jointly in a seamless fashion. Experimental results on the SENSIAC ATR database demonstrate the advantages of our method, both qualitatively and quantitatively, over existing methods that use template matching or separate view and identity manifolds.
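The particle-filter inference described in the abstract can be sketched as follows. This is a minimal illustration of filtering over a 2D latent (view, identity) space, not the paper's actual model: the `decode` mapping stands in for the learned JVIM shape reconstruction, and the random-walk dynamics, Gaussian likelihood, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, sigma=0.05):
    # Random-walk dynamics on the latent (view, identity) coordinates
    # (stand-in for the paper's motion/dynamics model).
    return particles + rng.normal(0.0, sigma, particles.shape)

def likelihood(particles, observation, decode, noise=0.1):
    # Weight each particle by how well its decoded shape matches the
    # observed shape descriptor (Gaussian observation model, assumed).
    shapes = decode(particles)                    # (N, D) predicted shapes
    err = np.sum((shapes - observation) ** 2, axis=1)
    return np.exp(-err / (2 * noise ** 2))

def resample(particles, weights):
    # Systematic resampling to concentrate particles on high-weight regions.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights / weights.sum()), positions)
    return particles[np.minimum(idx, n - 1)]

def particle_filter(observations, decode, n_particles=500):
    # Particles live on the 2D latent manifold: (view coord, identity coord).
    particles = rng.uniform(-1, 1, size=(n_particles, 2))
    estimates = []
    for obs in observations:
        particles = propagate(particles)
        w = likelihood(particles, obs, decode)
        particles = resample(particles, w)
        estimates.append(particles.mean(axis=0))  # posterior-mean estimate
    return np.array(estimates)
```

Because view and identity are inferred on the same latent space, each filtering step updates both jointly, which is the sense in which tracking and recognition are performed "in a seamless fashion."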

Original language: English
Title of host publication: 2012 IEEE International Conference on Image Processing, ICIP 2012 - Proceedings
Pages: 1357-1360
Number of pages: 4
DOIs
Publication status: Published - 2012
Event: 2012 19th IEEE International Conference on Image Processing, ICIP 2012 - Lake Buena Vista, FL, United States
Duration: 30 Sept 2012 - 3 Oct 2012

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 2012 19th IEEE International Conference on Image Processing, ICIP 2012
Country/Territory: United States
City: Lake Buena Vista, FL
Period: 30/09/12 - 3/10/12

