Learning mappings for synthesis from near infrared to visual light images

Jie Chen*, Dong Yi, Jimei Yang, Guoying Zhao, Stan Z. Li, Matti Pietikäinen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

118 Citations (Scopus)

Abstract

This paper deals with a new problem in face recognition research, in which the enrollment and query face samples are captured under different lighting conditions. In our case, the enrollment samples are visual light (VIS) images, whereas the query samples are taken under near infrared (NIR) conditions. It is very difficult to directly match face samples captured under these two lighting conditions because of their different visual appearances. In this paper, we propose a novel method for synthesizing VIS images from NIR images based on learning the mappings between images of different spectra (i.e., NIR and VIS). Our approach reduces the inter-spectral differences significantly, thus allowing effective matching between faces taken under different imaging conditions. Face recognition experiments clearly show the efficacy of the proposed approach.
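The core idea of the abstract — learning a mapping from NIR patches to VIS patches and using it to synthesize pseudo-VIS images — can be sketched as follows. This is a minimal illustration only, assuming a patch-based linear (ridge-regression) mapping on synthetic data; it is not necessarily the authors' exact model, and the data generation here is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: paired NIR/VIS image patches, flattened to
# vectors. In practice these would come from co-registered NIR and VIS
# face images of the same subjects.
n_pairs, patch_dim = 500, 64  # e.g. 8x8 patches
nir_patches = rng.standard_normal((n_pairs, patch_dim))
true_map = 0.1 * rng.standard_normal((patch_dim, patch_dim))
vis_patches = nir_patches @ true_map + 0.01 * rng.standard_normal((n_pairs, patch_dim))

# Learn a linear mapping W minimizing ||NIR @ W - VIS||^2 + lam * ||W||^2
# (ridge regression in closed form).
lam = 1e-3
W = np.linalg.solve(
    nir_patches.T @ nir_patches + lam * np.eye(patch_dim),
    nir_patches.T @ vis_patches,
)

# Synthesize a pseudo-VIS patch from a previously unseen NIR patch; the
# synthesized patches would then be recombined into a full pseudo-VIS face
# image for matching against the VIS gallery.
new_nir = rng.standard_normal((1, patch_dim))
pseudo_vis = new_nir @ W
print(pseudo_vis.shape)  # (1, 64)
```

Once the mapping is learned, query NIR faces can be converted into pseudo-VIS faces, so that matching happens within a single spectrum.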

Original language: English
Title of host publication: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
Publisher: IEEE Computer Society
Pages: 156-163
Number of pages: 8
ISBN (Print): 9781424439935
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009 - Miami, FL, United States
Duration: 20 Jun 2009 – 25 Jun 2009

Publication series

Name: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009

Conference

Conference: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009
Country/Territory: United States
City: Miami, FL
Period: 20/06/09 – 25/06/09
