Abstract
Face attributes prediction has a growing number of applications in human-computer interaction, face verification, and video surveillance. Various studies show that dependencies exist among face attributes. A multi-task learning architecture can build synergy among correlated tasks through parameter sharing in the shared layers. However, most multi-task learning architectures ignore the dependencies between tasks in the task-specific layers. Thus, how to further boost the performance of individual tasks by exploiting task dependencies among face attributes remains challenging. In this paper, we propose a multi-task learning architecture that uses task dependencies for face attributes prediction and evaluate its performance on smile and gender prediction. Attention modules designed into the task-specific layers of the proposed architecture learn task-dependent disentangled representations. Experimental results on the Faces of the World (FotW) and Labeled Faces in the Wild-a (LFWA) datasets demonstrate the effectiveness of the proposed network in comparison with a traditional multi-task learning architecture and state-of-the-art methods.
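The sketch below illustrates, in PyTorch, the general shape of the architecture described above: a shared convolutional backbone whose parameters serve both tasks, followed by task-specific branches that each apply an attention module before classification. The module names, layer sizes, and the squeeze-and-excitation-style attention design are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Stand-in for a task-specific attention module (assumed design)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight the shared feature channels for this particular task.
        return x * self.gate(x)


class MultiTaskFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared layers: parameters are reused by both tasks (smile, gender).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(8),
        )
        # Task-specific layers: each task has its own attention module and classifier.
        self.smile_attn = ChannelAttention(64)
        self.gender_attn = ChannelAttention(64)
        self.smile_head = nn.Linear(64 * 8 * 8, 2)
        self.gender_head = nn.Linear(64 * 8 * 8, 2)

    def forward(self, x):
        f = self.shared(x)
        smile = self.smile_head(self.smile_attn(f).flatten(1))
        gender = self.gender_head(self.gender_attn(f).flatten(1))
        return smile, gender


# Joint training would sum the per-task losses so the shared layers learn from both tasks.
model = MultiTaskFaceNet()
images = torch.randn(4, 3, 128, 128)
smile_logits, gender_logits = model(images)
```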
| Original language | English |
|---|---|
| Article number | 535 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 9 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 1 Jun 2019 |
Keywords
- Attention
- Deep convolutional neural network
- Face attributes prediction
- Multi-task learning
- Task dependencies