Yuze He1,2, Yanning Zhou2*, Wang Zhao1, Jingwen Ye2, Yushi Bai1,
Kaiwen Xiao2, Yong-Jin Liu1*, Zhongqian Sun2, Wei Yang2
1Tsinghua University 2Tencent AIPD
*Corresponding Authors
[2025/09/25] Code, dataset, and pretrained checkpoints are released!
Set up a Python environment and install the required packages:
conda create -n charm python=3.9 -y
conda activate charm
# Install torch, torchvision based on your machine configuration
pip install torch==2.1.0 torchvision==0.16.0 --index-url https://download.pytorch.org/whl/cu118
# Install other dependencies
pip install -r requirements.txt
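Optionally, a quick sanity check that PyTorch and torchvision installed correctly and that CUDA is visible (a minimal sketch; it only prints versions and GPU availability):
import torch
import torchvision

# Expecting torch 2.1.0 / torchvision 0.16.0 from the install commands above
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())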
Then download the data and pretrained weights:
- Our Model Weights: Download from our 🤗 Hugging Face repository (download here) and place them in ./ckpt/.
- Michelangelo’s Point Cloud Encoder: Download weights from Michelangelo’s Hugging Face repo and save them to ./ckpt/.
- Test data: Download from this Google Drive link, then decompress the files into ./test_cases/; or download from our 🤗 Hugging Face dataset repository:
from huggingface_hub import hf_hub_download, list_repo_files

# Get the list of all files in the dataset repo
files = list_repo_files(repo_id="hyz317/CHARM", repo_type="dataset")

# Download each file into ./test_cases
for file in files:
    file_path = hf_hub_download(
        repo_id="hyz317/CHARM",
        filename=file,
        repo_type="dataset",
        local_dir='./test_cases'
    )
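Alternatively, the whole dataset repo can be fetched in one call with snapshot_download from the same huggingface_hub package:
from huggingface_hub import snapshot_download

# Download the entire dataset repo into ./test_cases in a single call
snapshot_download(repo_id="hyz317/CHARM", repo_type="dataset", local_dir="./test_cases")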
After downloading and organizing the files, your project directory should look like this:
- test_cases/
└── pc/ # Test point cloud data
- ckpt/
├── charm.safetensors # Our model checkpoint
└── shapevae-256.ckpt # Michelangelo ShapeVAE checkpoint
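Before running inference, it can help to verify that everything landed where the scripts expect it. A minimal sketch based on the layout above (the listed paths are the only assumption):
from pathlib import Path

# Checkpoints and test data expected by the inference scripts (see layout above)
expected = [
    Path("ckpt/charm.safetensors"),
    Path("ckpt/shapevae-256.ckpt"),
    Path("test_cases/pc"),
]

for path in expected:
    status = "ok" if path.exists() else "MISSING"
    print(f"[{status}] {path}")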
# Autoregressive generation
python infer.py
# Sample point clouds from predictions
python sample.py
# Calculate evaluation metrics
python eval.py
# Note: file_dir in configs/train.yml should be changed to your processed dataset directory.
accelerate launch --config_file acc_configs/gpu8.yaml train.py -t YOUR_JOB_NAME
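If you prefer to update the config programmatically rather than editing it by hand, here is a minimal sketch using PyYAML (it assumes configs/train.yml is plain YAML with a top-level file_dir key; adjust the key path if your config nests it differently):
import yaml  # PyYAML

CONFIG_PATH = "configs/train.yml"
DATASET_DIR = "/PATH/TO/YOUR/PROCESSED_DATASET"  # placeholder: your processed dataset directory

# Load the training config, repoint file_dir at the processed dataset, and write it back
with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

cfg["file_dir"] = DATASET_DIR  # assumes file_dir is a top-level key

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)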
Due to policy restrictions, we are unable to redistribute the raw 3D models of the training dataset. However, you can download the VRoid dataset by following the instructions provided in PAniC-3D. Please note: when downloading the VRoid dataset, replace the metadata.json file mentioned in the PAniC-3D instructions with the file from this link. All other steps should follow the original PAniC-3D guide.
In place of the raw data, we provide the preprocessing scripts and the training data list.
First, install Blender and download the VRM Blender Add-on. Then, install the add-on using the following command:
blender --background --python blender/install_addon.py -- VRM_Addon_for_Blender-release.zip
Next, execute the Blender script to separate the hair meshes from the 3D models:
cd blender
python distributed_uniform.py --input_dir /PATH/TO/YOUR/VROIDDATA --save_dir /PATH/TO/YOUR/SAVEDIR --workers 32
- /PATH/TO/YOUR/VROIDDATA: Specify the directory where you downloaded the VRoid dataset.
- /PATH/TO/YOUR/SAVEDIR: Specify the directory where the separated hair models will be saved.
After separating the hair, run the following script to post-process the hairstyles and convert them into a template format:
python process_hair.py --input_dir /PATH/TO/YOUR/SAVEDIR
- /PATH/TO/YOUR/SAVEDIR: This should be the same directory where the separated hair models were saved in the previous step.
Finally, sample point clouds from the processed hair models:
python sample_hair.py --input_dir /PATH/TO/YOUR/SAVEDIR
- /PATH/TO/YOUR/SAVEDIR: Again, use the directory containing the post-processed hair models.
If you find our work useful, please kindly cite:
@article{he2025charm,
title={CHARM: Control-point-based 3D Anime Hairstyle Auto-Regressive Modeling},
author={He, Yuze and Zhou, Yanning and Zhao, Wang and Ye, Jingwen and Bai, Yushi and Xiao, Kaiwen and Liu, Yong-Jin and Sun, Zhongqian and Yang, Wei},
journal={arXiv preprint arXiv:2509.21114},
year={2025}
}
