The Superior Lisp Interaction Mode for Emacs
Post-training with Tinker
Fast and memory-efficient exact attention
MiniMax-M2, a model built for Max coding & agentic workflows.
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Dingo: A Comprehensive AI Data, Model and Application Quality Evaluation Tool
Our library for RL environments + evals
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI.
Text-audio foundation model from Boson AI
Step-Audio 2 is an end-to-end multi-modal large language model designed for industry-strength audio understanding and speech conversation.
Awesome resources for foundation agents: papers, repos, blogs, and more.
JAX (Flax) implementation of algorithms for Deep Reinforcement Learning with continuous action spaces.
[NeurIPS 2025] PyTorch implementation of ThinkSound, a unified framework for generating audio from any modality, guided by Chain-of-Thought (CoT) reasoning.
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
[ICASSP 2024] This is the official code for "VoiceFlow: Efficient Text-to-Speech with Rectified Flow Matching"
A project page template for academic papers. Demo at https://eliahuhorwitz.github.io/Academic-project-page-template/
Train transformer language models with reinforcement learning.