Learning Highlight Separation of Real High Resolution Portrait Image

Ruikang Ju, Dongdong Weng*, Bin Liang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

This work presents an approach for highlight separation of real high-resolution portrait images. To obtain reliable ground truth for real images, a controllable portrait image collection system with 156 groups of light sources has been built. It uses 4 cameras to capture portrait images of 36 subjects from different angles, and 4 data processing strategies are then applied to these images to produce 4 training datasets. Based on these datasets, 4 U-Net networks are trained, each taking a single image as input. For testing and evaluation, 2560×2560 resolution images are fed into the 4 models to determine the best data processing strategy and trained network. Our method produces accurate and plausible highlight separation results for 2560×2560 high-resolution images, including when the subject is not looking straight at the camera.
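The abstract describes training U-Net networks that take a single portrait image as input and separate out the highlight layer. The following is a minimal sketch of such a setup in PyTorch; the paper does not publish its architecture or training code, so the layer widths, the sigmoid output, and the subtraction used to recover the diffuse layer are assumptions for illustration only.

```python
# Hypothetical single-image highlight-separation sketch (not the authors' code).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class HighlightUNet(nn.Module):
    """Small U-Net mapping an RGB portrait to a predicted highlight layer."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(3, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 3, 1)  # 3-channel highlight layer

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))

# Inference: the paper feeds 2560x2560 portraits to the trained models;
# a smaller placeholder tensor is used here so the sketch runs on modest hardware.
model = HighlightUNet()
portrait = torch.rand(1, 3, 512, 512)
with torch.no_grad():
    highlight = model(portrait)
# Assumed convention: diffuse layer = input minus predicted highlight layer.
diffuse = (portrait - highlight).clamp(0, 1)
```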

Original language: English
Title of host publication: ICCCV 2021 - Proceedings of the 4th International Conference on Control and Computer Vision
Publisher: Association for Computing Machinery
Pages: 18-23
Number of pages: 6
ISBN (Electronic): 9781450390477
DOIs
Publication status: Published - 13 Aug 2021
Event: 4th International Conference on Control and Computer Vision, ICCCV 2021 - Virtual, Online, China
Duration: 13 Aug 2021 - 15 Aug 2021

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 4th International Conference on Control and Computer Vision, ICCCV 2021
Country/Territory: China
City: Virtual, Online
Period: 13/08/21 - 15/08/21

Keywords

  • Highlight separation
  • Image collection system
  • Neural network
  • Real image dataset
