Stars
A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
[NeurIPS 2025] Pixel-Perfect Depth
Official implementation of TrajBooster
A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
Infinite Photorealistic Worlds using Procedural Generation
ComfyMind: Toward General-Purpose Generation via Tree-Based Planning and Reactive Feedback
Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method
Official code repo for GLOVER and GLOVER++.
[NeurIPS 2025 Spotlight 🎊] DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy
Official repo for the [CVPR 2025] paper "Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation". https://jiaming-zhou.github.io/projects/HumanRobotAlign/
Low-level locomotion policy training in Isaac Lab
[CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding
[ICRA 2024]: Train your parkour robot in less than 20 hours.
Automated Apple Music Lossless Sample Rate Switching for Audio Devices on Macs.
Official repo for the [CoRL 2024] paper "Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation"
A curated list of awesome open-source grasping libraries and resources
[CVPR 2023] Official repository for downloading, processing, visualizing, and training models on the ARCTIC dataset.
