A Translucency Image Editing Method Based On StyleGAN

Mingyuan Zhang, Hongsong Li*, Shengyao Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In the field of image-based material editing, most studies focus on opaque materials, and editing translucent materials remains a challenge. In this paper, we propose a method for editing the translucency of a single input image, using the Style-based Generator Architecture for Generative Adversarial Networks (StyleGAN) with a pixel2style2pixel (pSp) encoder. We introduce a T-space, derived by autoencoders that map the StyleGAN latent space into a latent space more meaningful for translucency editing. Within this T-space, we train a group of multi-layer perceptrons (MLPs) to obtain directional change vectors for three chosen parameters of the BRDF and BSSRDF models, which make it possible to vary the translucency level of the object in the input image in three different ways. Experimental results demonstrate that our approach achieves effective translucency editing in both rendered and captured images.
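
The record gives no implementation details beyond the abstract, but a minimal sketch of the pipeline it describes could look as follows. Everything here is an assumption made for illustration, not taken from the paper: the module names (TSpaceAutoencoder, DirectionMLP, edit_translucency), the 18 × 512 W+ code shape and 512-dimensional T-space, the layer widths, and the psp_encoder/stylegan callables are all hypothetical placeholders.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# pSp inversion -> autoencoder projection into a T-space -> MLP-predicted
# direction for one translucency parameter -> StyleGAN re-synthesis.
# Shapes, names, and layer sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn


class TSpaceAutoencoder(nn.Module):
    """Maps StyleGAN W+ codes into a lower-dimensional T-space and back."""

    def __init__(self, wplus_dim=18 * 512, t_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(wplus_dim, 2048), nn.LeakyReLU(0.2),
            nn.Linear(2048, t_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(t_dim, 2048), nn.LeakyReLU(0.2),
            nn.Linear(2048, wplus_dim),
        )

    def forward(self, w_plus):
        # w_plus: (batch, 18, 512); flatten before the linear layers.
        t = self.encoder(w_plus.flatten(1))
        return self.decoder(t).view_as(w_plus), t


class DirectionMLP(nn.Module):
    """Predicts a T-space direction for one chosen scattering parameter
    (the paper trains one such MLP per BRDF/BSSRDF parameter)."""

    def __init__(self, t_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(t_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, t_dim),
        )

    def forward(self, t):
        return self.net(t)


def edit_translucency(image, psp_encoder, stylegan, ae, mlp, strength=1.0):
    """One editing pass: invert, step along a learned direction, re-synthesize."""
    w_plus = psp_encoder(image)        # pSp inversion into W+
    _, t = ae(w_plus)                  # project into T-space
    t_edited = t + strength * mlp(t)   # move along the translucency direction
    w_edited = ae.decoder(t_edited).view_as(w_plus)
    return stylegan(w_edited)          # decode the edited latent to an image
```

Under these assumptions, `strength` controls how far the latent code moves along the learned direction, and swapping in a different `mlp` would select a different one of the three parameter-specific editing behaviors the abstract mentions.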

Original language: English
Title of host publication: Eighth International Conference on Computer Graphics and Virtuality, ICCGV 2025
Editors: Haiquan Zhao
Publisher: SPIE
ISBN (Electronic): 9781510689213
DOIs
Publication status: Published - 2025
Event: 8th International Conference on Computer Graphics and Virtuality, ICCGV 2025 - Chengdu, China
Duration: 21 Feb 2025 – 23 Feb 2025

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 13557
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 8th International Conference on Computer Graphics and Virtuality, ICCGV 2025
Country/Territory: China
City: Chengdu
Period: 21/02/25 – 23/02/25

Keywords

  • Feature extraction
  • Generative Adversarial Networks
  • Image-based material editing
  • Translucency
