DriveLaW: Unifying Planning and Video Generation in a Latent Driving World

Tianze Xia1,2*, Yongkang Li1,2*, Lijun Zhou2*, Jingfeng Yao1, Kaixin Xiong2, Haiyang Sun2†, Bing Wang2,
Kun Ma2, Guang Chen2, Hangjun Ye2, Wenyu Liu1, Xinggang Wang1,✉

1 Huazhong University of Science and Technology 2 Xiaomi EV

(*) Equal contribution. (†) Project leader. (✉) Corresponding author.

Paper PDF | Project Page

Abstract

World models have become crucial for autonomous driving, as they learn how scenarios evolve over time to address the long-tail challenges of the real world. However, current approaches relegate world models to limited roles: they operate within ostensibly unified architectures that still keep world prediction and motion planning as decoupled processes. To bridge this gap, we propose DriveLaW, a novel paradigm that unifies video generation and motion planning. By directly injecting the latent representation of its video generator into the planner, DriveLaW ensures inherent consistency between high-fidelity future generation and reliable trajectory planning. Specifically, DriveLaW consists of two core components: DriveLaW-Video, a powerful world model that generates high-fidelity future forecasts with expressive latent representations, and DriveLaW-Act, a diffusion planner that produces consistent and reliable trajectories from DriveLaW-Video's latent representation; both components are optimized with a three-stage progressive training strategy. The strength of this unified paradigm is demonstrated by new state-of-the-art results on both tasks: DriveLaW not only advances video prediction significantly, surpassing the best-performing prior work by 33.3% in FID and 1.8% in FVD, but also sets a new record on the NAVSIM planning benchmark.
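To make the latent-injection idea above concrete, below is a minimal, self-contained PyTorch sketch. All class and variable names (DriveLaWVideoSketch, DriveLaWActSketch, latent, etc.) are hypothetical placeholders and do not reflect the released code or API; the sketch only illustrates how a diffusion-style planner can be conditioned on the same latent that a video world model produces.

# Conceptual sketch only (hypothetical names, not the released DriveLaW API).
# A video world model encodes past frames into a latent; a diffusion-style
# planner denoises candidate trajectories conditioned on that same latent.
import torch
import torch.nn as nn


class DriveLaWVideoSketch(nn.Module):
    """Toy stand-in for a video world model: past frames -> shared latent."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 3, T, H, W) past camera frames
        return self.encoder(frames)  # (B, latent_dim) shared latent


class DriveLaWActSketch(nn.Module):
    """Toy diffusion-style planner conditioned on the world-model latent."""

    def __init__(self, latent_dim: int = 256, horizon: int = 8):
        super().__init__()
        self.horizon = horizon
        self.denoiser = nn.Sequential(
            nn.Linear(latent_dim + horizon * 2 + 1, 512),
            nn.ReLU(),
            nn.Linear(512, horizon * 2),  # predicted noise on (x, y) waypoints
        )

    def forward(self, noisy_traj, latent, t):
        # noisy_traj: (B, horizon, 2), latent: (B, latent_dim), t: (B, 1)
        x = torch.cat([noisy_traj.flatten(1), latent, t], dim=-1)
        return self.denoiser(x).view(-1, self.horizon, 2)


# Latent injection: the planner consumes the SAME latent used for generation,
# which is the consistency property the abstract emphasizes.
video_model = DriveLaWVideoSketch()
planner = DriveLaWActSketch()
frames = torch.randn(2, 3, 4, 64, 64)   # dummy batch of past frames
latent = video_model(frames)             # (2, 256)
noisy = torch.randn(2, 8, 2)             # noisy trajectory sample
t = torch.rand(2, 1)                     # diffusion timestep
pred_noise = planner(noisy, latent, t)   # (2, 8, 2)

The key design choice illustrated here is that the planner receives the generator's latent directly rather than re-encoding the scene, so prediction and planning share one representation.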

Overview

News

[2025/12/30] arXiv paper released. Models and code are coming soon. Please stay tuned! ☕️

Updates

  • Release Paper
  • Release Full Models
  • Release Inference Framework
  • Release Training Framework

Citation

If you find DriveLaW useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{xia2025drivelaw,
  title={DriveLaW: Unifying Planning and Video Generation in a Latent Driving World},
  author={Xia, Tianze and Li, Yongkang and Zhou, Lijun and Yao, Jingfeng and Xiong, Kaixin and Sun, Haiyang and Wang, Bing and Ma, Kun and Ye, Hangjun and Liu, Wenyu and others},
  journal={arXiv preprint arXiv:2512.23421},
  year={2025}
}
