The only CLI you'll ever need!
Gemini CLI or Claude Code? Why not both—and a bit more.
LangCode brings Gemini, Anthropic, OpenAI, and Ollama together with ReAct & Deep modes, fully inline, anywhere you need it.

- Interactive Launcher: Start with `langcode` and configure everything through a user-friendly interface.
- AI-Powered Code Understanding: Deeply analyzes your code to answer questions and generate insights.
- Automated Coding Tasks: Implements features, fixes bugs, and refactors code with minimal effort.
- Safe and Reviewable Changes: Generates clear diffs for every modification, ensuring you stay in control.
- Multi-LLM Support: Seamlessly integrates with Google Gemini and Anthropic Claude, intelligently routing to the best model for the task.
- Customizable Instructions: Tailor the agent's behavior with project-specific rules and guidelines using `.langcode/langcode.md`.
- Extensible with MCP: Integrate custom tools via the Model Context Protocol (MCP).
- Installation:

  ```bash
  pip install langchain-code
  ```

- Launch the Interactive Launcher:

  Just type `langcode` in your terminal and hit Enter. This opens a user-friendly interactive menu where you can easily configure your session and access various functionalities without needing to remember specific command-line arguments. See the image shown above.
The interactive mode serves as the central hub for all your coding tasks. It allows you to:
- Choose a Command: Select what you want to do: `chat`, `feature`, `fix`, or `analyze`.
- Configure the Engine: Pick between `react` (fast and efficient) and `deep` (for complex tasks).
- Enable Smart Routing: Let LangCode automatically select the best LLM for each task.
- Set the Priority: Optimize for `cost`, `speed`, or `quality` when using smart routing.
- Manage Autopilot: Enable fully autonomous mode for the Deep Agent (use with caution!).
- Toggle Apply Mode: Allow LangCode to automatically write changes to your file system.
- Select an LLM: Explicitly choose between Anthropic and Google Gemini, or let LangCode decide.
- Specify the Project Directory: Tell LangCode where your codebase is located.
- Edit Environment Variables: Quickly add or modify API keys and other settings in your `.env` file.
- Customize Instructions: Open the `.langcode/langcode.md` file to add project-specific guidelines.
- Configure MCP Servers: Set up Model Context Protocol (MCP) servers for advanced tool integration.
- Define a Test Command: Specify a command to run after making changes (e.g., `pytest -q`).
- Access Help: Press `h` to toggle help and `q` or `Esc` to quit.
While the interactive launcher is the recommended way to use LangCode, you can also use the following commands directly from the terminal:
- `langcode chat`: Starts an interactive chat session.
- `langcode feature`: Implements a new feature.
- `langcode fix`: Fixes a bug.
- `langcode analyze`: Analyzes the codebase.
- `langcode instr`: Opens the project instructions file.
See the "Commands" section below for more details.
You can use the following slash commands during a chat session:
- `/clear`: Clears the chat history.
- `/select`: Returns to the interactive launcher.
- `/exit` or `/quit`: Ends the chat session.
- `/help`: Displays a help message.
- `/memory` or `/stats`: Shows memory and statistics about the current session (deep mode only).
- `/env`: Creates an environment file (`.env`) if one doesn't exist.
LangCode uses a few key files for configuration:
- `.env`: Stores API keys and other environment variables. You can create and edit this file directly from the interactive launcher.
- `.langcode/langcode.md`: Contains project-specific instructions for the agent. Edit this file to tailor LangCode's behavior to your codebase.
- `.langcode/mcp.json`: Configures Model Context Protocol (MCP) servers for advanced tool integration.
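Laid out in a project, these files typically look like this (a per-user `~/.langcode/mcp.json` is also supported, as noted in the MCP section below):

```
your-project/
├── .env
└── .langcode/
    ├── langcode.md
    └── mcp.json
```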
LangCode provides a set of commands to interact with your codebase. Here's a breakdown of each command and its usage:
Starts an interactive chat session with the agent. This is useful for exploring the codebase, asking questions, and making scoped edits.
```bash
langcode chat [--llm <provider>] [--mode <react|deep>] [--auto] [--router] [--priority <balanced|cost|speed|quality>] [--project-dir <path>]
```

- `--llm`: Specifies the LLM provider to use (`anthropic` or `gemini`). If not specified, LangCode will automatically select a provider based on availability and router policy.
- `--mode`: Selects the reasoning engine to use: `react` (default) or `deep`. The `deep` mode enables more complex, multi-step reasoning.
- `--auto`: (Deep mode only) Enables autopilot mode, where the agent plans and executes tasks end-to-end without asking for confirmation.
- `--router`: Enables smart model routing, which automatically picks the most efficient LLM per prompt based on the specified priority.
- `--priority`: Sets the routing priority: `balanced` (default), `cost`, `speed`, or `quality`. This influences the model choice when routing is enabled.
- `--project-dir`: Specifies the project directory to operate in. Defaults to the current directory.
Example:
```bash
langcode chat --llm gemini --mode deep --auto
```

This command starts a chat session using the Gemini LLM in deep mode with autopilot enabled.
Implements a feature end-to-end, following a plan -> edit -> verify workflow. This command is designed to automate the process of adding new features to your codebase.
```bash
langcode feature <request> [--test-cmd <command>] [--apply] [--llm <provider>] [--router] [--priority <balanced|cost|speed|quality>] [--project-dir <path>]
```

- `<request>`: A description of the feature to implement (e.g., "Add a dark mode toggle").
- `--test-cmd`: Specifies a command to run to verify the changes (e.g., `pytest -q`).
- `--apply`: Applies the changes and runs the test command without prompting for confirmation.
- `--llm`: Specifies the LLM provider to use (`anthropic` or `gemini`).
- `--router`: Enables smart model routing.
- `--priority`: Sets the routing priority.
- `--project-dir`: Specifies the project directory.
Example:
langcode feature "Implement user authentication with JWT" --test-cmd "pytest -q" --applyThis command implements user authentication with JWT, runs the tests, and applies the changes without prompting.
Diagnoses and fixes a bug, following a trace -> pinpoint -> patch -> test workflow. This command helps automate the process of fixing bugs in your codebase.
```bash
langcode fix <request> [--log <file>] [--test-cmd <command>] [--apply] [--llm <provider>] [--router] [--priority <balanced|cost|speed|quality>] [--project-dir <path>]
```

- `<request>`: A description of the bug to fix (e.g., "Fix crash on image upload").
- `--log`: Specifies the path to an error log or stack trace.
- `--test-cmd`: Specifies a command to run to verify the fix.
- `--apply`: Applies the changes and runs the test command without prompting for confirmation.
- `--llm`: Specifies the LLM provider to use.
- `--router`: Enables smart model routing.
- `--priority`: Sets the routing priority.
- `--project-dir`: Specifies the project directory.
Example:
langcode fix "Resolve memory leak in image processing module" --log memory_leak.log --test-cmd "pytest -q"This command resolves a memory leak using the provided log file and runs the tests to verify the fix.
Analyzes the codebase and generates insights using the Deep Agent architecture. This command is useful for getting an overview of the project and understanding its architecture.
```bash
langcode analyze <request> [--llm <provider>] [--router] [--priority <balanced|cost|speed|quality>] [--project-dir <path>]
```

- `<request>`: A question or request for analysis (e.g., "What are the main components of this project?").
- `--llm`: Specifies the LLM provider to use.
- `--router`: Enables smart model routing.
- `--priority`: Sets the routing priority.
- `--project-dir`: Specifies the project directory.
Example:
langcode analyze "Explain the data flow in the user authentication module"This command analyzes the user authentication module and explains the data flow.
Opens or creates the project-specific instructions file (`.langcode/langcode.md`) in your editor. This file allows you to provide custom instructions and guidelines for the agent to follow.
```bash
langcode instr [--project-dir <path>]
```

- `--project-dir`: Specifies the project directory.
Example:
```bash
langcode instr
```

This command opens the `.langcode/langcode.md` file in your default editor.
- `auto` – DEEP AUTOPILOT end-to-end execution
- `bug_fix` – guided diagnosis → patch → verify
- `feature_impl` – plan → small diffs → test → review
A rule-augmented, feedback-aware router picks the right model for each task based on complexity, context size, latency, and cost. Use `--llm` to override.
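For instance, using only the flags documented in the Commands section above:

```bash
# Let the router pick the most cost-efficient model per prompt
langcode chat --router --priority cost

# Override routing and pin a provider explicitly
langcode chat --llm anthropic
```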
The ReAct Agent is a fast loop for chat, reads, and targeted edits. It follows the ReAct (Reasoning and Acting) framework, where the agent reasons about the current state, decides on an action, and then observes the result of that action. This process is repeated until the agent reaches a conclusion or the maximum number of iterations is reached.
The Deep Agent is a structured, multi-agent system for complex work. It uses a LangGraph-style architecture to execute multi-step, long-horizon tasks. The Deep Agent consists of several sub-agents, each responsible for a specific task, such as research, code generation, and Git operations.
- ReAct Agent – fast loop for chat, reads, and targeted edits.
- Deep Agent – structured, multi-agent system for complex work:
  - `research-agent` – gathers local and web context
  - `code-agent` – generates minimal, verifiable diffs and runs tests
  - `git-agent` – stages changes and crafts commits
- Todos – managed via `write_todos`
- Virtual files – stage edits safely before touching disk
Run fully autonomously with a single final report. Combine with tests for safe, end‑to‑end changes.
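For example, using the documented chat flags:

```bash
# Deep Autopilot: the agent plans and executes end-to-end, then prints a single final report
langcode chat --mode deep --auto
```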
- `list_dir`, `glob`, `grep` – fast project navigation
- `read_file`, `edit_by_diff`, `write_file` – safe, reviewable edits
- `run_cmd` – gated shell execution
- `process_multimodal` – text + image understanding in chat
- `write_todos` – persistent plan items for Deep Agent
Extend LangCode with external capabilities using the Model Context Protocol. Define servers in `~/.langcode/mcp.json` or `./.langcode/mcp.json`.
Example `mcp.json` snippet:
```json
{
  "servers": {
    "github": { "command": "mcp-github" },
    "search": { "command": "mcp-search" }
  }
}
```

Once configured, invoke tools naturally in chat or within Deep Agent plans.
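For instance, with the `github` server above registered, a chat prompt along these lines can reach it (the phrasing is illustrative; any natural request works):

```
Use the github tool to list the open pull requests in this repo
```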
Use the `/img` directive to analyze images alongside your prompt:

```
/img assets/screenshot.png :: What’s wrong with this UI?
```
- Preview all diffs and commands before execution
- Explicit confirmation gates by default
- `--apply` for non-interactive runs (see the example below)
- Minimal diffs for clear reviews and quick rollbacks
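A non-interactive run looks like this (the request text is an illustrative placeholder):

```bash
# Applies changes and runs the test command without prompting for confirmation
langcode fix "Guard against empty config files" --test-cmd "pytest -q" --apply
```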
Summarize a repo:

```bash
langcode chat -p "Summarize this codebase and list risky areas"
```

Refactor with tests:

```bash
langcode feature "Switch user IDs to UUID" --test-cmd "pytest -q"
```

Patch from stack trace:

```bash
langcode fix --log crash.log --test-cmd "npm test"
```

- Run with `--verbose` to inspect tool calls and decisions
- Ensure provider keys are present in `.env`
- Narrow scope or provide `--include-directories` for large monorepos
- File an issue with a minimal reproduction
LangCode can be configured via environment variables and project-specific files. The following environment variables are supported:
- `ANTHROPIC_API_KEY`: API key for Anthropic Claude.
- `GOOGLE_API_KEY`: API key for Google Gemini.
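A minimal `.env` then looks like this (values are placeholders):

```
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_API_KEY=your-google-key
```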
Project-specific configuration files:
- `./.langcode/mcp.json`: Configuration for Model Context Protocol (MCP) servers.
- `./.langcode/`: Directory for project-specific context or defaults.
LangCode can be extended with custom tools using the Model Context Protocol (MCP). To create a custom tool, define a server that implements the MCP protocol and register it in the `mcp.json` file. Once configured, you can invoke the tool naturally in chat or within Deep Agent plans.
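As an illustration, registering a custom server follows the same schema as the snippet above (`mcp-jira` is a hypothetical stand-in for your own server binary, not a bundled tool):

```json
{
  "servers": {
    "jira": { "command": "mcp-jira" }
  }
}
```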
Issues and PRs are welcome. Please open an issue to discuss substantial changes before submitting a PR. See CONTRIBUTING.md for guidelines.
Apache 2.0. See LICENSE.
LangCode draws inspiration from the design and developer experience of Google’s Gemini CLI and Anthropic’s Claude Code, unified into a single, streamlined tool.


