Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉
Prerequisites:

- A running LiteLLM Proxy Server and its API key
- `uv` installed
Setup:

- Clone this repository:

  ```bash
  git clone https://github.com/1rgs/claude-code-openai.git
  cd claude-code-openai
  ```
- Install `uv` (if you haven't already):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

  (`uv` will handle dependencies based on `pyproject.toml` when you run the server)
- Configure environment variables: copy the example environment file:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and fill in your API keys and model configurations:

  - `OPENAI_API_KEY`: Your LiteLLM API key (required).
  - `OPENAI_API_BASE`: Your LiteLLM server URL (required).
  - `PREFERRED_PROVIDER` (optional): Set to `openai` (default). This determines the primary backend for mapping `haiku`/`sonnet`.
  - `BIG_MODEL` (optional): The model to map `sonnet` requests to. Defaults to `anthropic/claude-sonnet-4-20250514`.
  - `SMALL_MODEL` (optional): The model to map `haiku` requests to. Defaults to `anthropic/claude-3-5-haiku-latest`.
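  For example, a minimal `.env` pointing at a local LiteLLM instance might look like this (the values below are placeholders, not this project's defaults):

  ```dotenv
  OPENAI_API_KEY=sk-litellm-xxxx          # your LiteLLM key
  OPENAI_API_BASE=http://localhost:4000   # your LiteLLM server URL
  PREFERRED_PROVIDER=openai
  BIG_MODEL=gpt-4o                        # where sonnet requests go
  SMALL_MODEL=gpt-4o-mini                 # where haiku requests go
  ```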
- Run the server:

  ```bash
  uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
  ```

  (`--reload` is optional, for development)
- Install Claude Code (if you haven't already):

  ```bash
  npm install -g @anthropic-ai/claude-code
  ```
- Connect to your proxy:

  ```bash
  ANTHROPIC_BASE_URL=http://localhost:8082 claude
  ```
That's it! Your Claude Code client will now use the configured backend models (defaulting to OpenAI) through the proxy. 🎯
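To sanity-check the proxy without Claude Code, you can send it a request in Anthropic's Messages API format directly. This assumes the proxy exposes the standard Anthropic `/v1/messages` route:

```bash
curl http://localhost:8082/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-haiku-latest",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```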
This setup allows you to chain proxies: Claude Code → This Proxy → LiteLLM Proxy → Multiple LLM Providers 🔄
The proxy automatically maps the Claude model aliases to your configured backend models: `sonnet` requests go to `BIG_MODEL` and `haiku` requests go to `SMALL_MODEL`.
The proxy automatically adds the appropriate provider prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they appear in the OpenAI or Gemini model lists
For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet requests map to `gemini/[model-name]`
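A minimal sketch of that prefixing logic (the model lists here are hypothetical stand-ins for the ones the server actually maintains):

```python
# Hypothetical known-model lists; the real proxy keeps its own.
OPENAI_MODELS = {"gpt-4o", "gpt-4o-mini"}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25"}

def add_provider_prefix(model: str) -> str:
    """Prefix a bare model name so LiteLLM routes it to the right provider."""
    if model.startswith(("openai/", "gemini/", "anthropic/")):
        return model  # already fully qualified, e.g. a prefixed BIG_MODEL
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown names pass through unchanged

assert add_provider_prefix("gpt-4o") == "openai/gpt-4o"
```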
This proxy is built on top of LiteLLM, a unified interface for calling 100+ LLM APIs, and uses it to translate between providers while maintaining Anthropic API compatibility. Building on LiteLLM provides several key benefits:
- Universal Provider Support: Connect to OpenAI, Google (Gemini), Anthropic, Azure, AWS Bedrock, and many other providers
- Automatic Format Translation: LiteLLM handles the conversion between different API formats automatically
- Consistent Response Structure: All providers return responses in a standardized format
- Built-in Error Handling: Robust error handling and retry logic across providers
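For instance, the same call shape works across providers in plain LiteLLM (this is generic LiteLLM usage, not code from this repo; API keys are read from the environment):

```python
import litellm

# One unified call for two different providers; LiteLLM translates each
# into the provider's native API format behind the scenes.
for model in ("openai/gpt-4o", "gemini/gemini-2.5-pro-preview-03-25"):
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Say hi in one word."}],
    )
    # Every provider returns the same standardized response structure.
    print(model, response.choices[0].message.content)
```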
The proxy works by:
- Receiving requests in Anthropic's API format 📥
- Mapping model names: translating the Claude aliases (`haiku`/`sonnet`) to the target provider's models 🗺️
- Processing with LiteLLM: calling the target provider through LiteLLM's unified interface 🔄
- Formatting the response: converting LiteLLM's standardized response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
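Put together, the flow looks roughly like this (a sketch using a hypothetical alias map, not the server's actual internals):

```python
import litellm

# Hypothetical alias map; the real proxy derives it from BIG_MODEL/SMALL_MODEL.
MODEL_MAP = {"sonnet": "openai/gpt-4o", "haiku": "openai/gpt-4o-mini"}

def handle_anthropic_request(body: dict) -> dict:
    # 1-2. Receive an Anthropic-format request and map the Claude model name.
    alias = "sonnet" if "sonnet" in body["model"] else "haiku"

    # 3. Call the target provider through LiteLLM's unified interface.
    response = litellm.completion(model=MODEL_MAP[alias], messages=body["messages"])

    # 4-5. Convert the standardized response back to Anthropic's shape and return it.
    return {
        "type": "message",
        "role": "assistant",
        "content": [{"type": "text", "text": response.choices[0].message.content}],
    }
```

Beyond that core flow, the proxy also offers: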
- Streaming Support: Full support for streaming responses via LiteLLM's streaming capabilities (see the sketch after this list)
- Provider Fallbacks: Can fallback between providers (e.g., Google → OpenAI) if needed
- Model Prefix Handling: Automatically adds provider prefixes (`openai/`, `gemini/`) for LiteLLM routing
- Flexible Configuration: Environment-based configuration for easy provider switching
- Chain-able: Can proxy to other LiteLLM instances for complex routing scenarios
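The streaming path follows the same pattern: with `stream=True`, LiteLLM yields incremental OpenAI-style chunks, which the proxy can re-encode as Anthropic server-sent events (again a sketch of plain LiteLLM usage, not the project's code):

```python
import litellm

stream = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,  # yield delta chunks instead of one final response
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```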
The proxy maintains full compatibility with all Claude clients while providing access to the entire LiteLLM ecosystem. 🌟
Contributions are welcome! Please feel free to submit a Pull Request. 🎁
