
DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy

NeurIPS 2025 Spotlight


DexGarmentLab includes three major components:

  • Environment: We propose a Dexterous Garment Manipulation Environment with 15 task scenes (with an emphasis on bimanual coordination) built on 2,500+ garments.
  • Automated Data Collection: Garments within a category share the same structure, so category-level generalization is attainable. This empowers our proposed Automated Data Collection Pipeline to handle different garment positions, deformations, and shapes, given the task config (grasp positions and task sequence) and grasp hand pose provided by a single expert demonstration.
  • Generalizable Policy: With the diverse collected demonstration data, we introduce the Hierarchical gArment manipuLation pOlicy (HALO), which combines affordance points and trajectories to generalize across attribute variations in different tasks.

📢 Milestones

  • (2025.04.25) DexGarmentLab Simulation Environment Release!

  • (2025.04.25) DexGarmentLab Automated Data Collection Pipeline Release!

  • (2025.05.09) DexGarmentLab Baselines and Generalizable Policy Release!

  • (2025.05.09) DexGarmentLab Policy Validation Environment Release!

  • (2025.05.10) DexGarmentLab Dataset of Garment Manipulation Tasks Release!

📖 Usage

1. Isaac Sim Download

DexGarmentLab is built upon Isaac Sim 4.5.0; please refer to the NVIDIA official documentation to download it.

We recommend placing the Isaac Sim source folder at `~/isaacsim_4.5.0` to match the Python interpreter path specified in the `.vscode/settings.json` file we provide. If you prefer to use a custom location, please make sure that the Python interpreter path in `.vscode/settings.json` is updated accordingly.
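For reference, the interpreter entry in `.vscode/settings.json` would look something like the following. The exact key is VS Code's standard `python.defaultInterpreterPath`, but the bundled-Python path shown is an assumption about a typical Isaac Sim layout, not necessarily the file we provide:

```json
{
    // Hypothetical example: point VS Code at Isaac Sim's bundled Python.
    // Adjust the path if Isaac Sim is installed elsewhere.
    "python.defaultInterpreterPath": "~/isaacsim_4.5.0/kit/python/bin/python3"
}
```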

We will use `~/isaacsim_4.5.0/python.sh` to run Isaac Sim Python scripts. To make this more convenient, we can define an alias in the `.bashrc` file:

echo 'alias isaac="~/isaacsim_4.5.0/python.sh"' >> ~/.bashrc
source ~/.bashrc

2. Clone Repo

git clone git@github.com:wayrise/DexGarmentLab.git

3. Project Assets Download

Download the Robots, LeapMotion, Scene, Garment, and Human directories from Hugging Face.

We provide an automated download script in the Assets directory.

Following the instructions below, you can download all the assets.

isaac Assets/assets_download.py
unzip Robots.zip -d ./Assets
unzip LeapMotion.zip -d ./Assets
unzip Scene.zip -d ./Assets
unzip Garment.zip -d ./Assets
unzip Human.zip -d ./Assets

4. Additional Environment Dependencies for Project

isaac -m pip install -r requirements.txt

πŸ•οΈ Simulation Environment

We introduce 15 garment manipulation tasks across 8 garment categories, encompassing:

  • Garment-Self-Interaction Tasks: Fling Tops, Fling Dress, Fling Trousers, Fold Tops, Fold Dress, Fold Trousers. The key variables are garment position, orientation, and shape.

  • Garment-Environment-Interaction Tasks: Hang Dress, Hang Tops, Hang Trousers, Hang Coat, Wear Scarf, Wear Bowl Hat, Wear Baseball Cap, Wear Glove, Store Tops. The key variables are garment position, orientation, and shape, plus the positions of environment-interaction assets (e.g., hangers, pothooks, humans).

You can run the Python files in 'Env_StandAlone' using the following commands:

# e.g. Fixed Garment Shape, Position, Orientation and Environment Assets Position
isaac Env_StandAlone/Hang_Coat_Env.py

# Available arguments:
# 1. --env_random_flag:
#   True/False, whether to enable environment randomization (including position).
#   This flag only takes effect for Garment-Environment-Interaction tasks.
# 2. --garment_random_flag:
#   True/False, whether to enable garment randomization (including position, orientation, shape).
# 3. --record_video_flag:
#   True/False, whether to record a whole-procedure video.
# 4. --data_collection_flag:
#   True/False, whether to collect data (for policy training).

# e.g.
isaac Env_StandAlone/Hang_Coat_Env.py --env_random_flag True --garment_random_flag True
# runs Hang_Coat_Env with both environment and garment randomization enabled.
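The True/False values above suggest string-valued boolean flags. As a sketch of how such flags can be parsed (the helper `str2bool` is our own illustration, not necessarily the repo's implementation):

```python
import argparse

def str2bool(value: str) -> bool:
    # Accept the literal True/False strings used in the examples above.
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"expected True/False, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--env_random_flag", type=str2bool, default=False)
parser.add_argument("--garment_random_flag", type=str2bool, default=False)
parser.add_argument("--record_video_flag", type=str2bool, default=False)
parser.add_argument("--data_collection_flag", type=str2bool, default=False)

args = parser.parse_args(["--env_random_flag", "True", "--garment_random_flag", "True"])
print(args.env_random_flag, args.record_video_flag)  # → True False
```

A plain `action="store_true"` flag would not accept an explicit `True`/`False` argument, which is why a custom `type` callable is used here.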

βš’οΈ Automated Data Collection

Actually, our data collection procedure is already embedded into the Env_StandAlone/<Task_Name>_Env.py files mentioned above. The only required step is to set --data_collection_flag to True.

We provide Data_Collection.sh for convenience:

# usage template: bash Data_Collection.sh <task_name> <demo_num>
# e.g.
bash Data_Collection.sh Hang_Coat 10

# 10 pieces of data will be saved into 'Data/Hang_Coat', including:
# - final_state_pic: .png files, pictures of the final garment state, used for manual verification of task success.
# - train_data: .npz files, used for training data storage.
# - video: .mp4 files, recording the whole procedure.
# - data_collection_log.txt: records the data collection results, the corresponding assets, and the task configurations.
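Each training sample is stored as a .npz archive, which you can inspect with numpy before any preprocessing. The sketch below creates a stand-in file on the fly; the array names in the repo's actual train_data files may differ:

```python
import os
import tempfile

import numpy as np

# Create a stand-in .npz sample; real files live under Data/<task>/train_data
# and their array names may differ from these illustrative ones.
path = os.path.join(tempfile.mkdtemp(), "demo_sample.npz")
np.savez(
    path,
    point_cloud=np.zeros((1024, 3), dtype=np.float32),
    joint_state=np.zeros((2, 30), dtype=np.float32),
)

# List every array and its shape — usually enough to understand
# what a sample contains.
with np.load(path) as data:
    for name in sorted(data.files):
        print(name, data[name].shape)
```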

You can also download our prepared data from Hugging Face and unzip it into the Data folder. The file structure should look like this:

Data/
├── Hang_Coat/
│   ├── final_state_pic
│   ├── train_data
│   ├── video
│   └── data_collection_log.txt
......
├── Fling_Dress/
│   ├── final_state_pic
│   ├── train_data
│   ├── video
│   └── data_collection_log.txt

We provide a data-download script for convenience:

isaac Data/data_download.py
# after downloading, please unzip the archives into Data/

🚀 Generalizable Policy

Our policy HALO consists of two components:

  • Garment Affordance Model (GAM), which generates target manipulation points for the robot's movement. The corresponding affordance map is also used as the denoising condition for SADP.
  • Structure-Aware Diffusion Policy (SADP), which generates the robot's subsequent, garment-structure-aware movement after it reaches the target manipulation points.

Both can be found in the 'Model_HALO/' directory.

GAM

The file structure of GAM is as follows:

GAM/
├── checkpoints/    # checkpoints of GAM trained for each garment category
│   ├── Tops_LongSleeve/    # garment category
│   │   ├── assets_list.txt           # list of assets used for validation
│   │   ├── assets_training_list.txt  # list of assets used for training
│   │   ├── checkpoint.pth            # trained model
│   │   └── demo_garment.ply          # demo garment point cloud
│   ......
│   └── Trousers/
├── model                   # meta files of GAM
└── GAM_Encapsulation.py    # encapsulation of GAM

For detailed usage of GAM, please refer to GAM_Usage.md. The files in 'Env_StandAlone/' also provide examples of how to use GAM.

SADP

SADP is suited to Garment-Environment-Interaction tasks, all of which consist of a single stage.

  1. Installation
cd Model_HALO/SADP

isaac -m pip install -e .
  2. Data Preparation

We need to pre-process the .npz data collected in 'Data/' into .zarr data for training.

The only thing you need to do is run 'data2zarr_sadp.sh' in 'Model_HALO/SADP':

cd Model_HALO/SADP

# usage template: 
# bash data2zarr_sadp.sh <task_name> <stage_index> <train_data_num>
bash data2zarr_sadp.sh Hang_Coat 1 100

# Detailed parameter information can be found in the 'data2zarr_sadp.sh' file

The processed data will be saved in 'Model_HALO/SADP/data'. If you want to train SADP on a headless server, please move the data to the same location there.
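As background on the .zarr layout: diffusion-policy-style datasets typically concatenate all episodes into flat arrays plus an episode-end index. A numpy-only sketch of that layout follows; the actual keys produced by data2zarr_sadp.sh may differ:

```python
import numpy as np

def concat_episodes(episodes):
    """Flatten a list of per-episode arrays into one contiguous array
    plus an 'episode_ends' index — the layout commonly used by
    diffusion-policy-style zarr datasets. Key names are illustrative."""
    data = np.concatenate(episodes, axis=0)
    episode_ends = np.cumsum([len(ep) for ep in episodes])
    return {"data/state": data, "meta/episode_ends": episode_ends}

# Three fake episodes with 5, 3, and 4 timesteps of an 8-dim state.
eps = [np.zeros((n, 8), dtype=np.float32) for n in (5, 3, 4)]
store = concat_episodes(eps)
print(store["data/state"].shape)    # (12, 8)
print(store["meta/episode_ends"])   # [ 5  8 12]
```

Storing `episode_ends` instead of per-episode arrays lets a dataloader slice fixed-length windows from the flat arrays without crossing episode boundaries.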

  3. Training
cd Model_HALO/SADP

# usage template:
# bash train.sh <task_name> <expert_data_num> <seed> <gpu_id> <DEBUG_flag>
bash train.sh Hang_Coat_stage_1 100 42 0 False

# Detailed parameter information can be found in the 'train.sh' file
# Before training, we recommend setting DEBUG_flag to True to check the training process.

The checkpoints will be saved in 'Model_HALO/SADP/checkpoints'.

SADP_G

SADP_G is suited to Garment-Self-Interaction tasks, which means its denoising conditions exclude the interaction-object point cloud. Fold_Tops and Fold_Dress have three stages; Fold_Trousers, Fling_Dress, and Fling_Tops have two stages; Fling_Trousers has only one stage.

The procedure is the same as for SADP.

  1. Installation
cd Model_HALO/SADP_G

isaac -m pip install -e .
  2. Data Preparation
cd Model_HALO/SADP_G

# usage template: 
# bash data2zarr_sadp_g.sh <task_name> <stage_index> <train_data_num>
bash data2zarr_sadp_g.sh Fold_Tops 2 100

# Detailed parameter information can be found in the 'data2zarr_sadp_g.sh' file
  3. Training
cd Model_HALO/SADP_G

# usage template:
# bash train.sh <task_name> <expert_data_num> <seed> <gpu_id> <DEBUG_flag>
bash train.sh Fold_Tops_stage_2 100 42 0 False

# Detailed parameter information can be found in the 'train.sh' file
# Before training, we recommend setting DEBUG_flag to True to check the training process.

🎯 IL Baselines

We support two IL baselines here: Diffusion Policy and Diffusion Policy 3D. Their usage is the same as SADP's.

Diffusion Policy

  1. Installation
cd IL_Baselines/Diffusion_Policy

isaac -m pip install -e .
  2. Data Preparation
cd IL_Baselines/Diffusion_Policy

bash data2zarr_dp.sh Hang_Tops 1 100
  3. Training
cd IL_Baselines/Diffusion_Policy

bash train.sh Hang_Tops_stage_1 100 42 0 False

Diffusion Policy 3D

  1. Installation
cd IL_Baselines/Diffusion_Policy_3D

isaac -m pip install -e .
  2. Data Preparation
cd IL_Baselines/Diffusion_Policy_3D

bash data2zarr_dp3.sh Hang_Dress 1 100
  3. Training
cd IL_Baselines/Diffusion_Policy_3D

bash train.sh Hang_Dress_stage_1 100 42 0 False

🪄 Policy Validation

We provide HALO validation files for all tasks in the 'Env_Validation/' folder, along with 'Validation.sh' to validate the policy on different tasks.

# usage template:
# bash Validation.sh <task_name> <validation_num> <training_data_num>
bash Validation.sh Hang_Coat 100 100

# Detailed parameter information can be found in the 'Validation.sh' file

You can see how to load checkpoints and validate the policy in the files in the 'Env_Validation/' folder; we also summarize the core code in Validation_Core.md.
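To sweep validation over several tasks, a small shell loop around Validation.sh can help. The task list below is illustrative, and `echo` is used here so the sketch only prints the commands; drop the `echo` to actually run them:

```shell
# Sweep Validation.sh over several tasks; the names are examples from
# this README. Replace 'echo' with direct execution to really run them.
tasks="Hang_Coat Fold_Tops Fling_Dress"
for task in $tasks; do
    echo "bash Validation.sh $task 100 100"
done
```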

πŸ” Task Extension

Our simulation environment supports many extended tasks. If you want to add a new task, you can follow these steps:

  1. Define task sequence and organize task assets on your own.

  2. Define demo grasp points for GAM's reference. The usage of GAM can be found in GAM_Usage.md.

  3. Define the demo hand grasp pose. We provide a LeapMotion solution for generating hand grasp poses through teleoperation; its usage can be found in LeapMotion_Guidance.md. The guidance procedure has been verified on Ubuntu 20.04 and 22.04.

    After installing LeapMotion, you can run 'TeleOp_Env.py' to teleoperate ShadowHand and get hand grasp pose:

    isaac TeleOp_Env.py

You only need to use your right hand in the real world to teleoperate both hands in simulation, because the joints of the left and right hands are symmetric. The corresponding joint states will be printed in the terminal. You can copy the joint states and paste them into 'Env_Config/Robot/BimanualDex_Ur10e.py'.
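The right-to-left mirroring described above amounts to flipping the sign of the joints whose rotation axes reverse under left/right symmetry. The sketch below is a numpy illustration only — the joint count and which joints flip are assumptions, not the repo's actual ShadowHand convention:

```python
import numpy as np

# Hypothetical mirroring: which joints flip sign is hand-model-specific;
# this 24-joint mask (every second joint of six 4-joint chains) is
# purely illustrative.
MIRROR_SIGN = np.array([1, -1, 1, 1] * 6, dtype=np.float32)

def mirror_right_to_left(right_joints: np.ndarray) -> np.ndarray:
    """Map right-hand joint angles to the symmetric left-hand ones."""
    assert right_joints.shape == MIRROR_SIGN.shape
    return right_joints * MIRROR_SIGN

right = np.linspace(0.0, 1.0, 24, dtype=np.float32)
left = mirror_right_to_left(right)
print(left[:4])  # first chain: second joint is sign-flipped
```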

  4. Refer to the Python files in 'Env_StandAlone/' for the implementation of the new task.

Citation

If you find this repository useful in your research, please consider starring 🌟 this repo and citing 📑 our paper:

@misc{wang2025dexgarmentlab,
    title={DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy},
    author={Yuran Wang and Ruihai Wu and Yue Chen and Jiarui Wang and Jiaqi Liang and Ziyu Zhu and Haoran Geng and Jitendra Malik and Pieter Abbeel and Hao Dong},
    year={2025},
    eprint={2505.11032},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2505.11032},
}
