LoDAvatar: hierarchical embedding and selective detail enhancement for adaptive levels of detail Gaussian avatars

  • Xiaonuo Dongye
  • Hanzhi Guo
  • Le Luo*
  • Haiyan Jiang
  • Yihua Bao
  • Jie Guo
  • Zeyu Tian
  • Dongdong Weng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the advancement of virtual reality, the demand for 3D human avatars is increasing. The emergence of Gaussian Splatting has enabled the rendering of avatars with superior visual quality at reduced computational cost. Although researchers have proposed numerous methods for implementing drivable Gaussian avatars, limited attention has been given to balancing visual quality against computational cost. In this paper, we introduce LoDAvatar, a method that brings levels of detail to Gaussian avatars through hierarchical embedding and selective detail enhancement. The key steps of LoDAvatar are data preparation, Gaussian embedding, Gaussian optimization, and selective detail enhancement. We conducted experiments on Gaussian avatars at various detail levels, using both objective assessments and subjective evaluations. The results indicate that incorporating levels of detail into Gaussian avatars reduces computational cost during rendering while maintaining good visual quality, thereby increasing runtime frame rates. We advocate adopting LoDAvatar when rendering multiple dynamic Gaussian avatars or extensive Gaussian scenes, to balance visual quality and computational cost.
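The abstract does not detail the algorithm itself, but as a rough illustration of the general idea of distance-adaptive levels of detail for a Gaussian avatar, the minimal Python sketch below selects an LoD index from camera distance and assembles the Gaussians to render from a coarse-to-fine hierarchy. All names, thresholds, and level counts here are invented for illustration and should not be taken as LoDAvatar's actual method.

    # Hypothetical sketch of distance-based level-of-detail selection for a
    # Gaussian avatar; not the paper's hierarchical embedding or selective
    # detail enhancement algorithm.
    import numpy as np

    def select_lod(camera_pos: np.ndarray, avatar_pos: np.ndarray,
                   thresholds=(2.0, 5.0, 10.0)) -> int:
        """Map camera-to-avatar distance to a discrete LoD index (0 = finest)."""
        distance = float(np.linalg.norm(camera_pos - avatar_pos))
        for lod, limit in enumerate(thresholds):
            if distance < limit:
                return lod
        return len(thresholds)  # coarsest level beyond the last threshold

    def gaussians_for_lod(gaussian_levels: list[np.ndarray], lod: int) -> np.ndarray:
        """Concatenate hierarchy levels (ordered coarse to fine): coarse levels
        are always kept; finer levels are added only at close range."""
        keep = len(gaussian_levels) - lod  # number of hierarchy levels to render
        return np.concatenate(gaussian_levels[:max(keep, 1)], axis=0)

    # Usage: four hierarchy levels of (N_i, 3) Gaussian centers, coarse to fine.
    levels = [np.random.randn(n, 3) for n in (1_000, 4_000, 16_000, 64_000)]
    lod = select_lod(np.array([0.0, 0.0, 8.0]), np.zeros(3))
    print(lod, gaussians_for_lod(levels, lod).shape)

In such a scheme, distant avatars draw only the coarse base levels of the hierarchy, which is what reduces per-frame rendering cost while keeping nearby avatars at full detail.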

Original language: English
Article number: 178
Journal: Virtual Reality
Volume: 29
Issue number: 4
Publication status: Published - Dec 2025

Keywords

  • Gaussian splatting
  • Hierarchical embedding
  • Levels of detail
  • Selective detail enhancement
