Stars
This repository implements teleoperation of the Unitree humanoid robot using XR Devices.
[IROS 2025] Generalizable Humanoid Manipulation with 3D Diffusion Policies. Part 1: Train & Deploy of iDP3
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
[ICLR2024] The official implementation of paper "VDT: General-purpose Video Diffusion Transformers via Mask Modeling", by Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, Mingyu Ding.
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.
Open-Sora: Democratizing Efficient Video Production for All
Universal LLM Deployment Engine with ML Compilation
Imitation learning algorithms with Co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Example models using DeepSpeed
Official inference library for Mistral models
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models; a minimal sketch of the LoRA idea follows this list.
SD-Trainer: LoRA & Dreambooth training scripts & GUI for diffusion models, built on kohya-ss's trainer.
Ongoing research training transformer models at scale
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Wenda (闻达): an LLM invocation platform aimed at efficient content generation for specific environments, while accounting for the limited compute resources of individuals and small-to-medium enterprises, as well as knowledge security and privacy concerns.
CodeGeeX2: A More Powerful Multilingual Code Generation Model
Automated question answering over local knowledge bases, built on LangChain and LLMs such as ChatGLM-6B.
Use ChatGPT to summarize arXiv papers. Accelerates the entire research workflow with ChatGPT: full-paper summarization, professional translation, polishing, reviewing, and drafting review responses.
Provides a practical interactive interface for LLMs such as GPT and GLM, specially optimized for paper reading, polishing, and writing. Modular design; supports custom quick-action buttons and function plugins; project analysis and self-interpretation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; and local models such as chatglm3. Integrates 通义千问 (Qwen), deepseekcoder, 讯飞星火 (iFLYTEK Spark), 文心一言 (ERNIE Bot), llama2, rwkv, claude2, m…
Crowdfunded open-source project: use OpenReview's high-quality review data to fine-tune a professional reviewing and review-response LLM.
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
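
Several of the entries above center on LoRA fine-tuning. For orientation only, here is a minimal sketch of the core low-rank adaptation idea in plain PyTorch: a frozen linear layer augmented with a trainable low-rank update. The class name, rank, and scaling are illustrative assumptions and do not mirror any of the listed codebases.

```python
# Minimal, illustrative LoRA sketch (assumes PyTorch); names and defaults are hypothetical.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update (B @ A)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank             # common scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank correction; only lora_a and lora_b receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(64, 32), rank=4)
    print(layer(torch.randn(2, 64)).shape)    # torch.Size([2, 32])
```

In practice, trainers like those listed above typically attach such adapters to the attention projections of a pretrained model and train only the adapter weights, which keeps the fine-tuning memory footprint small.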

