Zhenwei (Zavier) Wang

Researcher
Tencent Hunyuan
zwwang96[at]gmail.com

Research Interests:

  • World Model
  • 3D Reconstruction
  • Video Generation
  • 3D Asset Generation

About Me

I am a researcher at Tencent Hunyuan, working in the world model team. I received my Ph.D. degree from the Department of Computer Science, City University of Hong Kong (CityU), under the supervision of Prof. Rynson W.H. Lau and Prof. Gerhard Hancke. I received my B.Eng. degree from Xiamen University.

During my Ph.D. studies, I was fortunate to work with Tengfei Wang and Ziwei Liu from Shanghai AI Lab and MMLab@NTU. I was also lucky to closely collaborate with Nanxuan Zhao from Adobe Research.

I'm interested in generative AI, including world models, video generation, 3D reconstruction, and 3D asset generation.

News

  • [Aug. 2025] Three papers accepted to SIGGRAPH Asia 2025. See you in Hong Kong!
  • [Aug. 2025] HunyuanWorld 1.0 is released. Check it out!
  • [Feb. 2025] One paper accepted to CVPR 2025.
  • [Feb. 2025] One paper accepted to ICLR 2025. See you in Singapore!
  • [Apr. 2024] One paper accepted to SIGGRAPH 2024. See you in Denver!
  • [Dec. 2023] One paper accepted to AAAI 2024.
  • [Apr. 2023] One paper accepted to SIGGRAPH 2023 (journal track).

Experience

  • Researcher, Tencent Hunyuan3D, World Model Team
    08.2025 - Present
  • Research Intern, Tencent Hunyuan3D, World Model Team
    04.2025 - 07.2025
  • Visiting Scholar, Harvard University, VCG Group
    10.2024 - 02.2025
  • Research Intern, Shanghai AI Laboratory, 3D AIGC Team
    09.2023 - 10.2024

Open-Source Projects

Technical Report

HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels

Core Contributor

Tencent Hunyuan3D, July 2025

Immersive and editable 3D scene generation from images or texts.

Technical Report

HY-World 1.5: A Systematic Framework for Interactive World Modeling with Real-Time Latency and Geometric Consistency

Core Contributor

Tencent Hunyuan3D, December 2025

The first open-source world model with real-time latency, long-term memory, interactive control, and long-horizon generation.

Selected Publications

(*equal contribution, ^intern, †corresponding author)

Preprint

WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling

Wenqiang Sun*^, Haiyu Zhang*^, Haoyuan Wang*, Junta Wu, Zehang Wang^, Zhenwei Wang, Yunhong Wang, Jun Zhang, Tengfei Wang†, Chunchao Guo

preprint, December 2025

Real-time interactive world model with long-term memory.

Preprint

WorldMirror: Universal 3D World Reconstruction with Any-Prior Prompting

Yifan Liu*^, Zhiyuan Min*^, Zhenwei Wang*, Junta Wu, Tengfei Wang†, Yixuan Yuan, Yawei Luo, Chunchao Guo

preprint, October 2025

Universal and feed-forward 3D reconstruction with any input and any output.

Preprint

MoCA: Mixture-of-Components Attention for Scalable Compositional 3D Generation

Zhiqi Li^, Wenhuan Li^, Tengfei Wang†, Zhenwei Wang, Junta Wu, Haoyuan Wang, Yunhan Yang, Zehuan Huang, Yang Li, Peidong Liu, Chunchao Guo

preprint, December 2025

Generating compositional 3D scenes and objects with sparse mixture-of-components attention.

SIGGRAPH Asia 2025

StyleSculptor: Zero-Shot Style-Controllable 3D Asset Generation with Texture-Geometry Dual Guidance

Zefan Qu, Zhenwei Wang, Haoyuan Wang, Ke Xu, Gerhard Hancke, Rynson W.H. Lau

Proc. ACM SIGGRAPH Asia, December 2025

Joint geometry and texture style-guided 3D asset generation in a training-free manner.

SIGGRAPH Asia 2025

Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation

Tianyu Huang*, Wangguandong Zheng*, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson W.H. Lau, Wangmeng Zuo, Chunchao Guo

ACM Trans. on Graphics (Proc. ACM SIGGRAPH Asia 2025, Journal), December 2025

Long-range 3D world exploration with RGB-D video diffusion.

SIGGRAPH Asia 2025

Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy

Yuhao Liu, Tengfei Wang, Fang Liu, Zhenwei Wang, Rynson W.H. Lau

Proc. ACM SIGGRAPH Asia, December 2025

Diverse and precise video object manipulation with 3D proxy and diffusion rendering.

CVPR 2025

MAGE: Single Image to Material-Aware 3D via the Multi-View G-Buffer Estimation Model

Haoyuan Wang*, Zhenwei Wang*, Xiaoxiao Long, Cheng Lin, Gerhard Hancke, Rynson W.H. Lau

CVPR, June 2025

A G-buffer estimation model for single image to high-quality material-aware 3D reconstruction.

ICLR 2025

Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion

Zhenwei Wang*, Tengfei Wang*, Zexin He, Gerhard Hancke, Ziwei Liu, Rynson W.H. Lau

ICLR, April 2025

A 3D diffusion model with RAG, supporting 3D generation from text, image, and existing 3D models.

SIGGRAPH 2024

ThemeStation: Generating Theme-Aware 3D Assets from Few Exemplars

Zhenwei Wang, Tengfei Wang, Gerhard Hancke, Ziwei Liu, Rynson W.H. Lau

Proc. ACM SIGGRAPH, August 2024

Generating a gallery of 3D assets with consistent themes from a few exemplars.

AAAI 2024

Recasting Regional Lighting for Shadow Removal

Yuhao Liu, Zhanghan Ke, Ke Xu, Fang Liu, Zhenwei Wang, Rynson W.H. Lau

AAAI, February 2024

Shadow removal approach that corrects degraded textures in shadow regions conditioned on recovered illumination.

SIGGRAPH 2023

Language-based Photo Color Adjustment for Graphic Designs

Zhenwei Wang*, Nanxuan Zhao*, Gerhard Hancke, Rynson W.H. Lau

ACM Trans. on Graphics (Proc. ACM SIGGRAPH 2023, Journal), August 2023

LangRecol, a language-based approach for recoloring photos in graphic designs.

Dundun
Hi, I'm Dundun! 🐱
Welcome to my master's homepage. Hope you enjoy browsing through his amazing research work!