This is the official repository for the GIGA project.
- NVIDIA GPU (tested on RTX 3090, A100, H100).
- A working CUDA installation.
- Preferably Linux (it should work on Windows too, though).
- Install uv to manage Python packaging; it will make your life much easier (a one-line installer is shown below).
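If you don't have uv yet, one common way to get it is via its official standalone installer:

```bash
# install uv (see the uv documentation for alternative installation methods)
curl -LsSf https://astral.sh/uv/install.sh | sh
```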
This repo was tested with CUDA 12.8. The easiest way is to install it using conda, mamba or micromamba:
```bash
mamba create -f environment.yml
```

We will have to link some of the CUDA headers to an appropriate place, so please run these commands too:

```bash
mamba activate giga
ln -s $CONDA_PREFIX/targets/x86_64-linux/include/* $CONDA_PREFIX/include
```

If you already have CUDA 12.8 installed on your system, or you are on Windows and symlinking might be an issue, you can skip installing CUDA from the anaconda packages.
Make sure to set the following environment variables correctly:
```bash
export CUDA_HOME=/usr/local/cuda-12.8  # might be in /usr/lib/cuda-12.8
# or on Windows, in /c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64
```

To verify that CUDA is set up correctly, run `nvcc -V`. You should see that your CUDA compiler has version 12.8.
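For example, the check might look like this (the exact output format varies between CUDA releases):

```bash
nvcc -V
# expect a line similar to:
# Cuda compilation tools, release 12.8, V12.8.xx
```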
Register at the official SMPLX website, accept the license terms, and download the .npz files for the male, female and neutral models; also download the .npz with the UV parameterization and put them all in the `$SMPLX_DATA` folder.
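As a sketch, the folder could then look like this (the model file names below are the default SMPL-X download names and the UV file name is an assumption; use whatever names you actually downloaded):

```bash
# assumed layout of $SMPLX_DATA; adjust file names to match your downloads
export SMPLX_DATA=/path/to/smplx_data
ls $SMPLX_DATA
# SMPLX_FEMALE.npz  SMPLX_MALE.npz  SMPLX_NEUTRAL.npz  smplx_uv.npz
```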
When CUDA is properly set up, run this command (it might take a while to correctly trace dependencies for gsplat):

```bash
uv sync
```

We use wandb to log training progress; we recommend using it too if you want to train GIGA.
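If this is your first time using wandb on this machine, authenticate once before training:

```bash
# one-time wandb authentication; alternatively set WANDB_API_KEY in your environment
wandb login
```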
Training parameters are configured in a .yaml file; the dataset(s) use a separate configuration file. We provide examples in the repo.
In this repo, training and testing functionality is provided only for the MVHumanNet, MVHumanNet++ and DNA-Rendering datasets.
Download them first, then make sure to set the correct paths to the datasets in each of the corresponding dataset configs (for example, `./configs/mvh/train_data.yaml`).
To work with other datasets, follow DATASETS.md for instructions.
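The exact key names for those paths are defined by the repo's configs, so as a quick sketch, just locate the path-like fields and edit them to point at your local copies:

```bash
# list path-like fields in the example MVHumanNet config, then edit them by hand
grep -nE 'path|root|dir' configs/mvh/train_data.yaml
```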
To train/evaluate GIGA designed for monocular inputs, you will have to prepare static RGB textures with the following script:
```bash
python scripts/generate_static_textures.py \
    configs/mvh/texture_data.yaml \
    /path/to/output/textures \
    --smplx-path $SMPLX_DATA \
    --texture-resolution 1024
```

Textures will be saved to the second positional argument (`/path/to/output/textures` above). This creates approximate textures from multi-view A-pose images (if they are available; otherwise the images are drawn for the pose specified in the dataset config). Once the textures are ready, set the `texture_dir` field in the dataset config to `/path/to/output/textures`.
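One way to do that last step from the shell (this assumes `texture_dir` is a top-level key in the YAML; adjust the pattern if it is nested):

```bash
# point texture_dir at the generated textures (assumes a top-level key)
sed -i 's|^texture_dir:.*|texture_dir: /path/to/output/textures|' configs/mvh/texture_data.yaml
```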
Starting training is simple:
```bash
# if you installed CUDA with mamba
mamba activate giga
source .venv/bin/activate
# source .venv/Scripts/activate.bat on Windows
python scripts/train.py \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/mvh/texture_data.yaml \
    --smplx-path $SMPLX_DATA  # directory with SMPLX .npz models
    # to override config parameters from CLI:
    # experiment_name=giga_alternative \
    # trainer.compile=False
```

Tips:
- Provide multiple dataset configs separated by commas, e.g. `--dataset-config configs/datasets/a.yaml,configs/datasets/b.yaml`, if you want to train on a mixture of datasets (see the combined example after this list).
- Specify `--resume-id <WANDB_ID>` to resume training logged to the `<WANDB_ID>` experiment.
- For training on SLURM clusters, check out the `slurm_train_job.py` script. Other clusters have not been explored; feel free to adapt it for your case.
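For instance, combining the first two tips, a run that trains on a mixture of two datasets and resumes an existing wandb experiment could look like this (the dataset config names and the run id are placeholders):

```bash
python scripts/train.py \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/datasets/a.yaml,configs/datasets/b.yaml \
    --smplx-path $SMPLX_DATA \
    --resume-id <WANDB_ID>  # id of the wandb run to resume
```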
We provide checkpoints of GIGA trained on MVHumanNet and DNA-Rendering; download them with this command:

```bash
bash download_checkpoint.sh mvh  # or dna
```

To render virtual humans from selected cameras and evaluate metrics on them, use this:
```bash
mamba activate giga
source .venv/bin/activate
python scripts/test.py evaluate \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/mvh/eval_data.yaml \
    --output-path /path/to/output \
    --smplx-path $SMPLX_DATA \
    --actor-id '100027'  # example actor id from the MVH dataset
```

Model configs specified with `--model-config` should be in the same directory as their respective checkpoints; the download script takes care of this for the provided checkpoints.
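If you evaluate a checkpoint you trained yourself, a simple way to satisfy this is to copy the model config into the checkpoint directory and point `--model-config` at that copy (paths below are placeholders):

```bash
# hypothetical paths: keep the model config next to its checkpoint
cp configs/mvh/giga.yaml /path/to/my_checkpoint_dir/giga.yaml
```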
If you only need to render images without computing metrics:
```bash
mamba activate giga
source .venv/bin/activate
python scripts/test.py render \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/mvh/eval_data.yaml \
    --output-path /path/to/output \
    --smplx-path $SMPLX_DATA \
    --actor-id '100027'
```

In general, running `python scripts/test.py --help` and `python scripts/test.py evaluate --help` will help you understand the arguments better.
Use this to render a circular camera trajectory around the actor:
```bash
mamba activate giga
source .venv/bin/activate
python scripts/test.py freeview \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/mvh/eval_data.yaml \
    --output-path /path/to/output \
    --smplx-path $SMPLX_DATA \
    --actor-id '100027' \
    --trajectory-mode orbit \
    --up +y
```

Notes:
- `--trajectory-mode` can be either `orbit` for a circular camera orbiting the actor, or `interpolate` to interpolate between views from the dataset (see the example after this list).
- Run `python scripts/test.py freeview --help` for help with other options.
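For instance, the same render with the alternative trajectory mode (the orbit-specific `--up` flag is dropped here, which is an assumption; keep it if the script requires it):

```bash
python scripts/test.py freeview \
    --model-config configs/mvh/giga.yaml \
    --dataset-config configs/mvh/eval_data.yaml \
    --output-path /path/to/output \
    --smplx-path $SMPLX_DATA \
    --actor-id '100027' \
    --trajectory-mode interpolate
```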
This repo has also been tested with torch==2.6.0+cu124, torch==2.7.0 and torch==2.7.1. In fact, any working combination of PyTorch (not older than 2.6.0) and CUDA should work. If you want to use a different PyTorch and CUDA, modify `pyproject.toml` and run `uv sync --reinstall`; this exercise is left for the reader.
- The SO3 manipulation module was borrowed from abcamiletto.
- Throughout this project, the Claude Sonnet family (3.5, 3.7 and 4) and Gemini 2.5 Pro provided invaluable help.
```bibtex
@article{zubekhin2025giga,
  title={GIGA: Generalizable Sparse Image-driven Gaussian Humans},
  author={Zubekhin, Anton and Zhu, Heming and Gotardo, Paulo and Beeler, Thabo and Habermann, Marc and Theobalt, Christian},
  year={2025},
  journal={arXiv},
  eprint={2504.07144},
}
```