A weakly supervised method for makeup-invariant face verification

Yao Sun, Lejian Ren, Zhen Wei, Bin Liu, Yanlong Zhai, Si Liu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

27 Citations (Scopus)

Abstract

Face verification, which aims to determine whether two face images belong to the same identity, is an important task in the multimedia area. Face verification becomes more challenging when the person is wearing makeup. However, collecting sufficient makeup and non-makeup image pairs is tedious, which poses great challenges for deep learning methods of face verification. In this paper, we propose a new weakly supervised method for face verification. Our method takes advantage of the plentiful video resources available from the Internet. The face verification model is pre-trained on these freely available videos and fine-tuned on small makeup and non-makeup datasets. To fully exploit the video contexts and the limited makeup and non-makeup datasets, several techniques are used to improve performance. A novel loss function with a triplet term and two pairwise terms is defined, and multiple facial parts are combined by the proposed voting strategy to generate better verification results. Experiments on a benchmark dataset (Guo et al., 2014) [1] and a newly collected face dataset show the superiority of the proposed method.
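The abstract describes a loss built from one triplet term and two pairwise terms, plus a part-level voting strategy, but gives no exact formulation. The sketch below is a minimal PyTorch illustration under assumed choices: the margin, the weighting `w_pair`, the squared-Euclidean pairwise terms, and the `part_vote` helper are all hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F


def verification_loss(anchor, positive, negative, margin=0.3, w_pair=0.5):
    """Illustrative combined loss: one triplet term plus two pairwise terms.

    anchor/positive share an identity; negative comes from a different one.
    All inputs are L2-normalised embedding batches of shape (B, D).
    Margin and weights are placeholder values, not the paper's settings.
    """
    # Triplet term: pull anchor-positive together, push anchor-negative apart.
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)

    # Pairwise term 1: squared distance between matching (same-identity) pairs.
    pos_pair = (anchor - positive).pow(2).sum(dim=1).mean()

    # Pairwise term 2: hinge that keeps non-matching pairs beyond the margin.
    neg_dist = (anchor - negative).pow(2).sum(dim=1)
    neg_pair = F.relu(margin - neg_dist).mean()

    return triplet + w_pair * (pos_pair + neg_pair)


def part_vote(part_scores, threshold=0.5):
    """Hypothetical majority vote over per-part similarity scores
    (e.g. eyes, nose, mouth regions): accept the pair if most parts agree."""
    votes = [float(s) > threshold for s in part_scores]
    return sum(votes) > len(votes) / 2
```

In this sketch the pairwise terms act as a contrastive-style regulariser alongside the triplet objective, and the vote fuses per-part decisions rather than averaging raw scores; the actual weighting and fusion rule used in the paper may differ.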

Original language: English
Pages (from-to): 153-159
Number of pages: 7
Journal: Pattern Recognition
Volume: 66
DOI
Publication status: Published - 1 Jun 2017
