This repository provides the implementation of manifold-aware transformers, a novel neural network architecture for predicting garment dynamics. It is based on our paper Neural Garment Dynamics via Manifold-Aware Transformers, published at EUROGRAPHICS 2024.
This code has been tested under Ubuntu 20.04. Before starting, please configure your Anaconda environment with

```
conda env create -f environment.yml
conda activate manifold-aware-transformers
```

Alternatively, you may install the following packages (and their dependencies) manually:
- numpy == 1.23.1 (note: `numpy.bool` is deprecated in newer versions, which causes an error when loading the SMPL model)
- pytorch == 2.0.1
- scipy >= 1.10.1
- cholespy == 1.0.0
- scikit-sparse == 0.4.4
- libigl == 2.4.1
- tensorboard >= 2.12.1
- tqdm >= 4.65.0
- chumpy == 0.70
We provide several pre-trained models trained on different datasets. Download the pre-trained models and the example sequences from Google Drive, extract them, and place them in the `pre-trained` and `data` directories, respectively, directly under the project root.
Run `demo.sh`.
The prediction of the network will be stored in `[path to pre-trained model]/sequence/prediction.pkl`. The corresponding ground truth and body motion are stored in `gt.pkl` and `body.pkl`, respectively. Please refer to here for the specification and visualization of the predicted mesh sequence.
We use a custom format to store a sequence of meshes. The specific format can be found in the function `write_vert_pos_pickle()` in `mesh_utils.py`.
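For orientation only, here is a minimal sketch of pickling and unpickling a mesh sequence. The per-frame list-of-arrays layout below is a hypothetical stand-in; the authoritative format is the one written by `write_vert_pos_pickle()` in `mesh_utils.py`.

```python
import pickle
import numpy as np

# Hypothetical layout: a list of per-frame (V, 3) vertex-position arrays.
# The real format is defined by write_vert_pos_pickle() in mesh_utils.py;
# this only illustrates the generic pickle round trip.
frames = [np.random.rand(4, 3).astype(np.float32) for _ in range(5)]

with open("sequence.pkl", "wb") as f:
    pickle.dump(frames, f)

with open("sequence.pkl", "rb") as f:
    loaded = pickle.load(f)

print(len(loaded), loaded[0].shape)  # -> 5 (4, 3)
```

Inspecting `type()` and shapes of the loaded object in this way is a quick check before writing a custom visualizer.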
We provide a plugin for visualizing the mesh sequences directly in Blender here. It is based on the STOP-motion-OBJ plugin by @neverhood311.
Coming soon.
The input garment geometry is decimated to improve runtime efficiency. A separate module implemented in C++ is required for this step.
Our model requires the signed distance from the garment geometry to the body geometry as input. At inference time it is computed on the fly using a highly optimized GPU implementation.
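As a rough illustration of the quantity involved, the following is a simplified CPU sketch that computes an *unsigned* nearest-vertex distance with NumPy. The function name and the brute-force pairwise computation are our own for illustration; the repository's implementation computes a true signed distance to the body surface on the GPU.

```python
import numpy as np

def nearest_body_distance(garment_verts, body_verts):
    """Unsigned per-vertex distance from garment vertices to body vertices.

    Simplified stand-in: the actual model uses a *signed* distance to the
    body surface, computed with an optimized GPU implementation.
    """
    # Pairwise distances of shape (Ng, Nb), then the closest body vertex
    # per garment vertex.
    diff = garment_verts[:, None, :] - body_verts[None, :, :]
    return np.linalg.norm(diff, axis=-1).min(axis=1)

garment = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])
body = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(nearest_body_distance(garment, body))  # -> [1. 2.]
```

Note that the brute-force pairwise matrix is O(Ng x Nb) in memory, which is why practical implementations use spatial acceleration structures or GPU kernels instead.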
The current implementation for inference requires the same format, and you may use the following steps to preprocess the data for inference as well.
Please download the VTO dataset from here, and run the following command to preprocess the data:
```
python parse_data_vto.py --data_path_prefix=[path to downloaded vto dataset] --save_path=[path to save the preprocessed data]
```

Coming soon.
Coming soon.
The code in dataset/smpl.py is adapted from SMPL by @CalciferZh.
If you use this code for your research, please cite our paper:
```
@article{Li2024NeuralGarmentDynamics,
    author  = {Li, Peizhuo and Wang, Tuanfeng Y. and Kesdogan, Timur Levent and Ceylan, Duygu and Sorkine-Hornung, Olga},
    title   = {Neural Garment Dynamics via Manifold-Aware Transformers},
    journal = {Computer Graphics Forum (Proceedings of EUROGRAPHICS 2024)},
    volume  = {43},
    number  = {2},
    year    = {2024},
}
```