Blind face restoration: Benchmark datasets and a baseline model

Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li*, Guoren Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Blind Face Restoration (BFR) aims to generate high-quality face images from low-quality inputs. However, existing BFR methods are often trained and evaluated on private datasets, which makes fair comparison between approaches difficult. To address this issue, we introduce two benchmark datasets, BFRBD128 and BFRBD512, for evaluating state-of-the-art methods in five degradation scenarios: blur, noise, low resolution, JPEG compression artifacts, and full degradation. We evaluate with seven standard quantitative metrics and two task-specific metrics, AFLD and AFICS. Additionally, we propose an efficient baseline model, the Swin Transformer U-Net (STUNet), which outperforms state-of-the-art methods across a range of BFR tasks. The code, datasets, and trained models are publicly available at: https://github.com/bitzpy/Blind-Face-Restoration-Benchmark-Datasets-and-a-Baseline-Model.
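
For context, benchmarks of this kind typically synthesize low-quality inputs by degrading high-quality faces with the operations named in the abstract. The sketch below illustrates such a degradation chain under assumed settings; the function name `degrade_face`, the parameter values, and the order of operations are illustrative assumptions, not the paper's published pipeline.

```python
# Minimal sketch of a classic synthetic degradation chain used to build
# BFR low-quality/high-quality pairs, covering the five scenarios named
# in the abstract: blur, noise, low resolution, JPEG compression
# artifacts, and their combination ("full degradation").
# All kernel sizes, noise levels, and quality factors below are
# illustrative assumptions, not the benchmark's actual settings.
import cv2
import numpy as np

def degrade_face(img: np.ndarray, scale: int = 4,
                 blur_sigma: float = 3.0, noise_sigma: float = 10.0,
                 jpeg_quality: int = 40) -> np.ndarray:
    """Apply the full degradation chain to a high-quality uint8 BGR face image."""
    h, w = img.shape[:2]
    # 1. Blur: Gaussian blur; kernel size derived from sigma.
    lq = cv2.GaussianBlur(img, (0, 0), blur_sigma)
    # 2. Low resolution: bicubic downsampling by `scale`.
    lq = cv2.resize(lq, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    # 3. Noise: additive white Gaussian noise, clipped back to uint8 range.
    noise = np.random.normal(0.0, noise_sigma, lq.shape)
    lq = np.clip(lq.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    # 4. JPEG compression artifacts: encode/decode at a low quality factor.
    ok, buf = cv2.imencode('.jpg', lq, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    assert ok, "JPEG encoding failed"
    lq = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Upsample back so LQ/HQ pairs share the same resolution, a common
    # convention in BFR pipelines.
    return cv2.resize(lq, (w, h), interpolation=cv2.INTER_CUBIC)
```

Each single-degradation scenario corresponds to applying just one of steps 1 to 4; the "full degradation" scenario composes all of them, as in the chain above.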

Original language: English
Article number: 127271
Journal: Neurocomputing
Volume: 574
DOIs
Publication status: Published - 14 Mar 2024

Keywords

  • Benchmark datasets
  • Blind face restoration
  • Comprehensive evaluation
  • Transformer network
