Lists (22)
algorithm
computer vision
Embodied AI
hr
humanoid
interesting projects
Isaac
manipulation
🌟paper list
planning
pure rl
🔮 qr_labs
Quadruped-related research labs
qr_literature
Quadruped literature collection
qr_manipulation
qr_model_base
Model-based controllers for quadruped robots
qr_navigation
Quadruped navigation
qr_reinforcement_learning
Reinforcement learning controllers for quadruped robots
qr_SLAM
reinforcement_learning
Reinforcement learning algorithms
tools
VLA
VLN
Stars
Official implementation of [AstraNav-World: World Model for Foresight Control and Consistency]
📚 This repository collects papers from arXiv on VLN, VLA, World Models, SLAM, Gaussian Splatting, nonlinear optimization, and related topics. It is updated automatically every day! The 10 most recent papers are posted in the issues section.
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
Open & Reproducible Research for Tracking VLAs
Simulated experiments for "Real-Time Execution of Action Chunking Flow Policies".
Official repository for OmniVLA training and inference code
A simulation platform for versatile Embodied AI research and development.
[RSS'25] This repository is the implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation"
[CoRL 2025] GC-VLN: Instruction as Graph Constraints for Training-free Vision-and-Language Navigation
[CVPR 2025] UniGoal: Towards Universal Zero-shot Goal-oriented Navigation
[NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation
[CVPR 2025] Source codes for the paper "3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning"
RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI
[RSS 2025] Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks.
[IROS 2025] Interactive Navigation with Learned Arm-pushing Controller
This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works.
[RA-L 2025] FrontierNet: Learning Visual Cues to Explore
VLA-0: Building State-of-the-Art VLAs with Zero Modification
[ICLR 2026] Official implementation for "JanusVLN: Decoupling Semantics and Spatiality with Dual Implicit Memory for Vision-Language Navigation"
SGLang is a high-performance serving framework for large language models and multimodal models.
Vision-and-Language Navigation in Continuous Environments using Habitat
Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD.
InternRobotics' open platform for building generalized navigation foundation models.
RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning