DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation

  • Guosheng Zhao
  • Chaojun Ni
  • Xiaofeng Wang
  • Zheng Zhu*
  • Xueyang Zhang
  • Yida Wang
  • Guan Huang
  • Xinze Chen
  • Boyuan Wang
  • Youyi Zhang
  • Wenjun Mei
  • Xingang Wang*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Closed-loop simulation is essential for advancing end-to-end autonomous driving systems. Contemporary sensor simulation methods, such as NeRF and 3DGS, rely predominantly on conditions closely aligned with training data distributions, which are largely confined to forward-driving scenarios. Consequently, these methods face limitations when rendering complex maneuvers (e.g., lane changes, acceleration, deceleration). Recent advancements in autonomous-driving world models have demonstrated the potential to generate diverse driving videos. However, these approaches remain constrained to 2D video generation, inherently lacking the spatiotemporal coherence required to capture the intricacies of dynamic driving environments. In this paper, we introduce DriveDreamer4D, which enhances 4D driving scene representation by leveraging world model priors. Specifically, we utilize the world model as a data machine to synthesize novel trajectory videos, where structured conditions are explicitly leveraged to control the spatial-temporal consistency of traffic elements. In addition, a cousin data training strategy is proposed to facilitate merging real and synthetic data for optimizing 4DGS. To our knowledge, DriveDreamer4D is the first to utilize video generation models for improving 4D reconstruction in driving scenarios. Experimental results reveal that DriveDreamer4D significantly enhances generation quality under novel trajectory views, achieving relative improvements in FID of 32.1%, 46.4%, and 16.3% compared to PVG, S3Gaussian, and Deformable-GS, respectively. Moreover, DriveDreamer4D markedly enhances the spatiotemporal coherence of driving agents, which is verified by a comprehensive user study and by relative increases of 22.6%, 43.5%, and 15.6% in the NTA-IoU metric.
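The NTA-IoU results above compare detected agent boxes in rendered novel-trajectory views against reference boxes; the abstract does not spell out the matching protocol, but the overlap core of any such metric is plain intersection-over-union. A minimal sketch, assuming axis-aligned 2D boxes in `(x1, y1, x2, y2)` form (`box_iou` is an illustrative helper, not the paper's code):

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the left/top edges, min of the right/bottom edges.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2×2 boxes offset by one unit overlap in a 1×1 region, giving an IoU of 1/7; a scene-level score would average such per-agent overlaps across matched detections.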

Original language: English
Pages (from-to): 12015-12026
Number of pages: 12
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025 - Nashville, United States
Duration: 11 Jun 2025 - 15 Jun 2025

Keywords

  • autonomous driving
  • scene reconstruction
  • video generation

