LLMRouter is an intelligent routing system designed to optimize LLM inference by dynamically selecting the most suitable model for each query. To achieve this, it provides:
- Smart Routing: Automatically routes queries to the optimal LLM based on task complexity, cost, and performance requirements.
- Multiple Router Models: Support for over 16 routing models, organized into four major categories (single-round routers, multi-round routers, agentic routers, and personalized routers) and covering a wide range of strategies such as KNN, SVM, MLP, Matrix Factorization, Elo Rating, graph-based routing, BERT-based routing, hybrid probabilistic methods, transformed-score routers, and more.
- Unified CLI: Complete command-line interface for training, inference, and interactive chat with a Gradio-based UI.
- Data Generation Pipeline: Complete pipeline for generating training data from 11 benchmark datasets with automatic API calling and evaluation.
- [2025-12]: LLMRouter is officially released - ship smarter, cost-aware LLM routing with 16+ routers, a unified `llmrouter` CLI, and a plugin workflow for custom routers.
- Supported Routers
- Installation
- Use Your Own Dataset
- Training a Router
- Running Inference via a Router
- Interactive Chat Interface with a Router
- Creating Your Own Routers
- Adding Your Own Tasks
- Acknowledgments
- Citation
| Router | Training | Inference | Description | Tutorial |
|---|---|---|---|---|
| `knnrouter` | ✅ | ✅ | K-Nearest Neighbors based routing | 📖 |
| `svmrouter` | ✅ | ✅ | Support Vector Machine based routing | 📖 |
| `mlprouter` | ✅ | ✅ | Multi-Layer Perceptron based routing | 📖 |
| `mfrouter` | ✅ | ✅ | Matrix Factorization based routing | 📖 |
| `elorouter` | ✅ | ✅ | Elo Rating based routing | 📖 |
| `routerdc` | ✅ | ✅ | Dual Contrastive learning based routing | 📖 |
| `automix` | ✅ | ✅ | Automatic model mixing | 📖 |
| `hybrid_llm` | ✅ | ✅ | Hybrid LLM routing strategy | 📖 |
| `graphrouter` | ✅ | ✅ | Graph-based routing | 📖 |
| `causallm_router` | ✅ | ✅ | Causal Language Model router | 📖 |
| `smallest_llm` | N/A | ✅ | Always routes to smallest model | 📖 |
| `largest_llm` | N/A | ✅ | Always routes to largest model | 📖 |
| Router | Training | Inference | Description | Tutorial |
|---|---|---|---|---|
| `router_r1` | LINK | ✅ | Pre-trained Router-R1 model for multi-turn conversations | 📖 |
| Router | Training | Inference | Description | Tutorial |
|---|---|---|---|---|
| `gmtrouter` | ✅ | ✅ | Graph-based personalized router with user preference learning | 📖 |
| Router | Training | Inference | Description | Tutorial |
|---|---|---|---|---|
| `knnmultiroundrouter` | ✅ | ✅ | KNN-based agentic router for complex tasks | 📖 |
| `llmmultiroundrouter` | N/A | ✅ | LLM-based agentic router for complex tasks | 📖 |
Clone the repository and install in editable mode using a virtual environment (e.g., with anaconda3):
# Clone the repository
git clone https://github.com/ulab-uiuc/LLMRouter.git
cd LLMRouter
# Create and activate virtual environment
conda create -n llmrouter python=3.10
conda activate llmrouter
# Install the package (base installation)
pip install -e .
# Optional: Install with RouterR1 support (requires GPU)
# RouterR1 is tested with vllm==0.6.3 (torch==2.4.0); the extra pins these versions.
pip install -e ".[router-r1]"
# Optional: Install all optional dependencies
pip install -e ".[all]"pip install llmrouter-libLLMRouter requires API keys to make LLM API calls for inference, chat, and data generation. Set the API_KEYS environment variable using one of the following formats:
Dict Format (for multiple service providers): Use this format when you have models from different service providers (e.g., NVIDIA, OpenAI, Anthropic) and want to use different API keys for each provider:
export API_KEYS='{"NVIDIA": "nvidia-key-1,nvidia-key-2", "OpenAI": ["openai-key-1", "openai-key-2"], "Anthropic": "anthropic-key-1"}'Dict Format Details:
- Keys: Service provider names (must match the
servicefield in your LLM candidate JSON) - Values: Can be:
- Comma-separated string:
"key1,key2,key3" - JSON array:
["key1", "key2", "key3"] - Single string:
"key1"
- Comma-separated string:
- Service Matching: The system automatically matches the
servicefield from your LLM candidate JSON to select the appropriate API keys - Round-Robin: Each service maintains its own round-robin counter for load balancing
- Error Handling: If a service is not found in the dict, a clear error message will be raised with available services listed
Example LLM Candidate JSON with service field:
{
"qwen2.5-7b-instruct": {
"service": "NVIDIA",
"model": "qwen/qwen2.5-7b-instruct",
"api_endpoint": "https://integrate.api.nvidia.com/v1"
},
"gpt-4": {
"service": "OpenAI",
"model": "gpt-4",
"api_endpoint": "https://api.openai.com/v1"
}
}

JSON Array Format (for multiple keys from same provider):
export API_KEYS='["your-key-1", "your-key-2", "your-key-3"]'Comma-Separated Format (alternative for multiple keys):
export API_KEYS='key1,key2,key3'

Single Key (for one API key):
export API_KEYS='your-api-key'

Notes:
- API keys are used for inference, the chat interface, and data generation (Step 3 of the pipeline)
- Multiple keys enable automatic load balancing across API calls
- When using the dict format, ensure the `service` field in your LLM candidate JSON matches the keys in your `API_KEYS` dict
- The environment variable must be set before running inference, chat, or data generation commands
- For persistent setup, add the export command to your shell profile (e.g., `~/.bashrc` or `~/.zshrc`)
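To make the accepted formats above concrete, here is a minimal parsing sketch. It is illustrative only: `parse_api_keys`, `next_key`, and the `"_default"` bucket are assumptions for this example, not LLMRouter's actual internals, but the behavior mirrors what is documented (dict keyed by service, JSON array, comma-separated string, or single key, with per-service round-robin and an error listing available services).

```python
import itertools
import json
import os

def parse_api_keys(raw: str) -> dict[str, list[str]]:
    """Normalize API_KEYS into {service: [key, ...]}; '_default' holds provider-agnostic keys."""
    def to_list(value):
        if isinstance(value, list):
            return value
        # "key1,key2" -> ["key1", "key2"]; "" stays as a single empty key (local providers)
        return [k.strip() for k in str(value).split(",") if k.strip() != ""] or [""]

    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        parsed = raw  # plain "key1,key2" or a single key

    if isinstance(parsed, dict):          # dict format: keys are service names
        return {service: to_list(keys) for service, keys in parsed.items()}
    return {"_default": to_list(parsed)}  # JSON array, comma-separated, or single key

keys = parse_api_keys(os.environ.get("API_KEYS", ""))
# One independent round-robin counter per service, as described above.
cyclers = {service: itertools.cycle(ks) for service, ks in keys.items()}

def next_key(service: str) -> str:
    cycler = cyclers.get(service) or cyclers.get("_default")
    if cycler is None:
        raise KeyError(f"No API keys configured for service '{service}'. Available: {list(cyclers)}")
    return next(cycler)
```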
API endpoints can be specified at two levels (resolved in priority order):
- Per-Model (highest priority): `api_endpoint` field in the LLM candidate JSON (`default_llm.json`)
- Router-Level (fallback): `api_endpoint` field in the router YAML config
- Error: a descriptive error is raised if neither is specified
LLM Candidate JSON (per-model endpoints):
{
"qwen2.5-7b-instruct": {
"model": "qwen/qwen2.5-7b-instruct",
"api_endpoint": "https://integrate.api.nvidia.com/v1",
...
},
"custom-model": {
"model": "custom/model-name",
"api_endpoint": "https://api.customprovider.com/v1",
...
}
}

Router YAML (default endpoint):
api_endpoint: 'https://integrate.api.nvidia.com/v1'  # Fallback for all models

Benefits: different models can use different providers, migration between providers is easy, and existing router configs remain backward compatible.
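As a rough illustration of the resolution order described above (function and variable names here are illustrative, not LLMRouter's actual implementation), the lookup could be sketched as:

```python
def resolve_api_endpoint(model_name: str, llm_data: dict, router_config: dict) -> str:
    """Sketch of the documented priority: per-model endpoint first,
    router-level endpoint as fallback, otherwise a descriptive error."""
    model_entry = llm_data.get(model_name, {})
    endpoint = model_entry.get("api_endpoint") or router_config.get("api_endpoint")
    if not endpoint:
        raise ValueError(
            f"No api_endpoint configured for '{model_name}': "
            "set it in the LLM candidate JSON or in the router YAML."
        )
    return endpoint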
For details, see Data Generation Pipeline documentation.
LLMRouter supports locally hosted LLM inference servers that provide OpenAI-compatible APIs (e.g., Ollama, vLLM, SGLang). For local providers, you can use an empty string "" as the API key value - the system automatically detects localhost endpoints and handles authentication accordingly.
Example with Ollama:
export API_KEYS='{"Ollama": ""}'{
"gemma3": {
"size": "3B",
"feature": "Gemma 3B model hosted locally via Ollama",
"input_price": 0.0,
"output_price": 0.0,
"model": "gemma3",
"service": "Ollama",
"api_endpoint": "http://localhost:11434/v1"
}
}

Important: Use the /v1 endpoint (OpenAI-compatible), not the native API endpoints. Empty strings are automatically detected for localhost endpoints (localhost or 127.0.0.1).
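The localhost detection described above can be pictured with a small sketch. The function names and the placeholder token are assumptions for illustration, not LLMRouter's actual code:

```python
from urllib.parse import urlparse

def is_local_endpoint(api_endpoint: str) -> bool:
    """Treat localhost/127.0.0.1 endpoints as locally hosted providers."""
    host = urlparse(api_endpoint).hostname or ""
    return host in ("localhost", "127.0.0.1")

def pick_api_key(api_endpoint: str, configured_key: str) -> str:
    # Local OpenAI-compatible servers (Ollama, vLLM, SGLang) typically accept any token,
    # so an empty configured key can be replaced with a harmless placeholder.
    if configured_key == "" and is_local_endpoint(api_endpoint):
        return "not-needed"
    return configured_key
```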
LLMRouter includes a complete data generation pipeline that transforms raw benchmark datasets into formatted routing data with embeddings. The pipeline supports 11 diverse benchmark datasets including Natural QA, Trivia QA, MMLU, GPQA, MBPP, HumanEval, GSM8K, CommonsenseQA, MATH, OpenbookQA, and ARC-Challenge.
The data generation pipeline consists of three main steps:
- Generate Query Data - Extract queries from benchmark datasets and create train/test split JSONL files
- Generate LLM Embeddings - Create embeddings for LLM candidates from their metadata
- API Calling & Evaluation - Call LLM APIs, evaluate responses, and generate unified embeddings + routing data
Start with the sample configuration file:
# Step 1: Generate query data
python llmrouter/data/data_generation.py --config llmrouter/data/sample_config.yaml
# Step 2: Generate LLM embeddings
python llmrouter/data/generate_llm_embeddings.py --config llmrouter/data/sample_config.yaml
# Step 3: API calling & evaluation (requires API_KEYS - see "Setting Up API Keys" section above)
python llmrouter/data/api_calling_evaluation.py --config llmrouter/data/sample_config.yaml --workers 100

The pipeline generates the following files:
- Query Data (JSONL): `query_data_train.jsonl` and `query_data_test.jsonl` - Query data with train/test split
- LLM Embeddings (JSON): `default_llm_embeddings.json` - LLM metadata with embeddings
- Query Embeddings (PyTorch): `query_embeddings_longformer.pt` - Unified embeddings for all queries
- Routing Data (JSONL): `default_routing_train_data.jsonl` and `default_routing_test_data.jsonl` - Complete routing data with model responses, performance scores, and token usage
Example routing data entry:
{
"task_name": "gsm8k",
"query": "Janet has 4 apples. She gives 2 to Bob. How many does she have left?",
"ground_truth": "2",
"metric": "GSM8K",
"model_name": "llama3-chatqa-1.5-8b",
"response": "Janet has 4 apples and gives 2 to Bob, so she has 4 - 2 = 2 apples left.",
"performance": 1.0,
"embedding_id": 42,
"token_num": 453
}

All paths and parameters are controlled via YAML configuration. The sample config file (llmrouter/data/sample_config.yaml) references the example data directory and can be used as-is or customized for your setup.
Note: Step 3 requires API keys for calling LLM services. See the Setting Up API Keys section above for configuration details.
For complete documentation including detailed file formats, embedding mapping system, configuration options, and troubleshooting, see llmrouter/data/README.md.
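To get a feel for the generated routing data, here is a small sketch that loads the JSONL output and summarizes per-model performance. The file name comes from the output list above (adjust the path to wherever your config writes it); the field names follow the example entry shown earlier.

```python
import json
from collections import defaultdict

# Output of Step 3 of the pipeline; adjust to your configured output directory.
path = "default_routing_train_data.jsonl"

scores = defaultdict(list)
with open(path) as f:
    for line in f:
        entry = json.loads(line)                      # one routing record per line
        scores[entry["model_name"]].append(entry["performance"])

for model, vals in sorted(scores.items()):
    print(f"{model}: mean performance {sum(vals) / len(vals):.3f} over {len(vals)} queries")
```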
Before training, ensure you have prepared your data using the Data Generation Pipeline or use the example data in data/example_data/.
Train various router models with your configuration:
# Train KNN router
llmrouter train --router knnrouter --config configs/model_config_train/knnrouter.yaml
# Train MLP router with GPU
CUDA_VISIBLE_DEVICES=2 llmrouter train --router mlprouter --config configs/model_config_train/mlprouter.yaml --device cuda
# Train MF router quietly
CUDA_VISIBLE_DEVICES=1 llmrouter train --router mfrouter --config configs/model_config_train/mfrouter.yaml --device cuda --quiet

Perform inference with trained routers (requires API keys - see Setting Up API Keys section):
# Single query inference
llmrouter infer --router knnrouter --config config.yaml --query "What is machine learning?"
# Batch inference from file
llmrouter infer --router knnrouter --config config.yaml --input queries.txt --output results.json
# Route only (without calling LLM API - no API keys needed)
llmrouter infer --router knnrouter --config config.yaml --query "Hello" --route-only
# Custom generation parameters
llmrouter infer --router knnrouter --config config.yaml --query "Explain AI" --temp 0.7 --max-tokens 2048 --verboseInput file formats supported: .txt (one query per line), .json (list of strings or objects with "query" field), .jsonl (one JSON object per line).
Quick Preview: Animated overview of the LLMRouter chat interface showing real-time routing and model selection.
Launch the chat interface (requires API keys - see Setting Up API Keys section):
# Basic chat interface
llmrouter chat --router knnrouter --config config.yaml
# Custom host and port
llmrouter chat --router knnrouter --config config.yaml --host 0.0.0.0 --port 7860
# With public sharing link
llmrouter chat --router knnrouter --config config.yaml --share
# Specify query mode
llmrouter chat --router knnrouter --config config.yaml --mode full_context --top_k 5

Query Modes:
- `current_only`: Routes based on the current query only (default)
- `full_context`: Combines all chat history with the current query
- `retrieval`: Retrieves the top-k similar historical queries for context
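As a rough sketch of the retrieval idea (illustrative only; the function and variable names are assumptions, not LLMRouter's implementation): embed the chat history, score it against the current query, and keep the top-k most similar past turns as extra context before routing.

```python
import torch
import torch.nn.functional as F

def retrieve_context(history_embs: torch.Tensor, history_texts: list[str],
                     query_emb: torch.Tensor, top_k: int = 5) -> list[str]:
    """Pick the top-k most similar past queries (cosine similarity) to use as context."""
    if len(history_texts) == 0:
        return []
    sims = F.cosine_similarity(history_embs, query_emb.unsqueeze(0), dim=-1)
    k = min(top_k, len(history_texts))
    top = torch.topk(sims, k=k).indices.tolist()
    return [history_texts[i] for i in top]
```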
You can also run the CLI scripts directly:
# Training
python -m llmrouter.cli.router_train --router knnrouter --config config.yaml
# Inference
python -m llmrouter.cli.router_inference --router knnrouter --config config.yaml --query "Hello"
# Chat
python -m llmrouter.cli.router_chat --router knnrouter --config config.yaml

LLMRouter supports a plugin system that allows you to add custom router implementations without modifying the core codebase. This makes it easy to experiment with new routing strategies or domain-specific routers.
1. Create your router directory:
mkdir -p custom_routers/my_router

2. Implement your router (custom_routers/my_router/router.py):
from llmrouter.models.meta_router import MetaRouter
import torch.nn as nn
class MyRouter(MetaRouter):
    """Your custom router implementation."""

    def __init__(self, yaml_path: str):
        # Initialize with a model (can be nn.Identity() for simple routers)
        model = nn.Identity()
        super().__init__(model=model, yaml_path=yaml_path)
        # Get available LLM names from config
        self.llm_names = list(self.llm_data.keys())

    def route_single(self, query_input: dict) -> dict:
        """Route a single query to the best LLM."""
        query = query_input['query']
        # Your custom routing logic here
        # Example: route based on query length
        selected_llm = (self.llm_names[0] if len(query) < 50
                        else self.llm_names[-1])
        return {
            "query": query,
            "model_name": selected_llm,
            "predicted_llm": selected_llm,
        }

    def route_batch(self, batch: list) -> list:
        """Route multiple queries."""
        return [self.route_single(q) for q in batch]

3. Create configuration (custom_routers/my_router/config.yaml):
data_path:
  llm_data: 'data/example_data/llm_candidates/default_llm.json'

hparam:
  # Your hyperparameters here

# Optional: Default API endpoint (used as fallback if models don't specify their own)
# Individual models can override this by specifying api_endpoint in the llm_data JSON file
api_endpoint: 'https://integrate.api.nvidia.com/v1'

4. Use your custom router (same as built-in routers!):
# Inference
llmrouter infer --router my_router \
    --config custom_routers/my_router/config.yaml \
    --query "What is machine learning?"

# List all routers (including custom ones)
llmrouter list-routers

Custom routers are automatically discovered from:
- `./custom_routers/` (recommended - project directory)
- `~/.llmrouter/plugins/` (user home directory)
- `$LLMROUTER_PLUGINS` environment variable (colon-separated paths)
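Custom routers can also be exercised directly from Python. A minimal sketch based on the MyRouter class defined above, assuming the custom_routers directory is importable (e.g., it contains __init__.py files or is on PYTHONPATH):

```python
from custom_routers.my_router.router import MyRouter

# Load the router from its YAML config and route a single query (no LLM API call needed).
router = MyRouter("custom_routers/my_router/config.yaml")
decision = router.route_single({"query": "What is machine learning?"})
print(decision["model_name"])  # name of the selected LLM candidate
```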
LLMRouter includes example custom routers you can learn from:
RandomRouter - Simple baseline that randomly selects an LLM
llmrouter infer --router randomrouter \
    --config custom_routers/randomrouter/config.yaml \
    --query "Hello world"

ThresholdRouter - Advanced trainable router with difficulty estimation
# Train the router
llmrouter train --router thresholdrouter \
    --config custom_routers/thresholdrouter/config.yaml

# Use for inference
llmrouter infer --router thresholdrouter \
    --config custom_routers/thresholdrouter/config.yaml \
    --query "Explain quantum computing"

For detailed guides on creating custom routers:
- Quick Start: custom_routers/README.md
- Implementation Summary: CUSTOM_ROUTER_SUMMARY.md
Rule-based routing:
def route_single(self, query_input):
    query = query_input['query'].lower()
    if 'code' in query:
        return {"model_name": "code-specialist"}
    elif len(query) < 50:
        return {"model_name": "small-fast-model"}
    else:
        return {"model_name": "large-capable-model"}

Embedding-based routing:
from llmrouter.utils import get_longformer_embedding

def route_single(self, query_input):
    embedding = get_longformer_embedding(query_input['query'])
    # Use embedding similarity to select best model
    selected = self._find_best_model(embedding)
    return {"model_name": selected}

Cost-optimized routing:
def route_single(self, query_input):
    difficulty = self._estimate_difficulty(query_input)
    # Select cheapest model that can handle the difficulty
    for model_name, info in sorted(self.llm_data.items(),
                                   key=lambda x: x[1]['cost']):
        if info['capability'] >= difficulty:
            return {"model_name": model_name}
    # Fallback: no model meets the difficulty, so pick the most capable one
    return {"model_name": max(self.llm_data, key=lambda m: self.llm_data[m]['capability'])}

LLMRouter supports custom task definitions that allow you to add new task types with custom prompt templates and evaluation metrics. Custom tasks are automatically discovered and integrated into the data generation and evaluation pipeline.
1. Create a task formatter (custom_tasks/my_tasks.py):
from llmrouter.utils.prompting import register_prompt
from llmrouter.prompts import load_prompt_template
@register_prompt('my_task', default_metric='my_metric')
def format_my_task_prompt(sample_data):
    system_prompt = load_prompt_template("task_my_task")
    user_query = f"Question: {sample_data.get('query', '')}"
    return {"system": system_prompt, "user": user_query}

2. Create a prompt template (custom_tasks/task_prompts/task_my_task.yaml):
template: |
  You are an expert at [task description]. [Instructions].

3. Register a custom metric (optional):
from llmrouter.evaluation import evaluation_metric
@evaluation_metric('my_metric')
def my_metric(prediction: str, ground_truth: str, **kwargs) -> float:
    return 1.0 if prediction == ground_truth else 0.0

4. Use your custom task:
import custom_tasks.my_tasks # Import triggers registration
from llmrouter.utils import generate_task_query
from llmrouter.utils.evaluation import calculate_task_performance
# Generate prompt
prompt = generate_task_query('my_task', {'query': '...'})
# Evaluate (metric automatically inferred from task)
score = calculate_task_performance(
    prediction="...",
    ground_truth="...",
    task_name="my_task"
)

For detailed guides on creating custom tasks:
- Complete Guide: custom_tasks/README.md
- Improve personalized routers: stronger user profiling, cold-start strategies, and online feedback updates.
- Integrate a multimodal router: support image/audio inputs and route by modality + task type to the right multimodal model.
- Add continual/online learning to adapt routers to domain drift (e.g., periodic re-training + feedback loops).
LLMRouter builds upon the excellent research from the community. We gratefully acknowledge the following works that inspired our router implementations:
- RouteLLM - Learning to Route LLMs with Preference Data (ICLR 2025)
- RouterDC - Query-Based Router by Dual Contrastive Learning (NeurIPS 2024)
- AutoMix - Automatically Mixing Language Models (NeurIPS 2024)
- Hybrid LLM - Cost-Efficient and Quality-Aware Query Routing (ICLR 2024)
- GraphRouter - A Graph-based Router for LLM Selections (ICLR 2025)
- GMTRouter - Personalized LLM Router over Multi-turn User Interactions
- Router-R1 - Teaching LLMs Multi-Round Routing and Aggregation via RL (NeurIPS 2025)
- FusionFactory - Fusing LLM Capabilities with Multi-LLM Log Data
We warmly welcome contributions from the community! A powerful open-source router framework requires the collective effort of everyone. If you have developed a new routing method, please consider submitting a PR to add it to LLMRouter. Together, we can build the most comprehensive LLM routing library!
We warmly welcome contributions from the community. LLMRouter is a living, extensible research framework, and its impact grows through the creativity and expertise of its contributors.
If you have developed a new routing strategy, learning objective, training paradigm, or evaluation protocol, we strongly encourage you to submit a pull request to integrate it into LLMRouter. All accepted contributions are explicitly credited, documented, and made available to a broad research and practitioner audience.
Contributing to LLMRouter is more than adding code. It is an opportunity to increase the visibility, adoption, and long-term impact of your work within the LLM systems community. Together, we aim to build the most comprehensive and extensible open-source library for LLM routing.
Notable contributions may be highlighted in documentation, examples, benchmarks, or future releases.
If you find LLMRouter useful for your research or projects, please cite it as:
@misc{llmrouter2025,
title = {LLMRouter: An Open-Source Library for LLM Routing},
author = {Tao Feng and Haozhen Zhang and Zijie Lei and Haodong Yue and Chongshan Lin and Jiaxuan You},
year = {2025},
howpublished = {\url{https://github.com/ulab-uiuc/LLMRouter}},
note = {GitHub repository}
}

