Build production-ready conversational AI applications in Python in minutes, not weeks ⚡️
⚠️ Notice: Chainlit is now community-maintained. As of May 1st, 2025, the original Chainlit team has stepped back from active development. The project is maintained by @Chainlit/chainlit-maintainers under a formal Maintainer Agreement.
Maintainers are responsible for code review, releases, and security.
Chainlit SAS provides no warranties on future updates. Want to help maintain? Apply here →
Website • Documentation • Chainlit Help • Cookbook
overview-chainlit.mp4
Open a terminal and run:

```sh
pip install chainlit
chainlit hello
```

If this opens the hello app in your browser, you're all set!
Clone this repository and populate the shared .env file before running any of the smoke tests:

```sh
python3 scripts/start_local.py --smoke-test
```

The helper script mirrors docs/local-setup.md by:

- Reading the credential inventory from `.env.example` so API keys remain DRY across developers and CI.
- Reusing any values already set in `.env` or exported in your shell, prompting only for the remainder (press Enter to keep documented defaults or leave them blank). Pass `--non-interactive` to skip prompts entirely in automation while still surfacing any empty secrets at the end.
- Running `scripts/smoke_test.py` when invoked with `--smoke-test` so the repository matches the documented workflow across local shells, GitHub Actions, and Cloud Build. The helper waits for `chainlit hello --ci --headless` to report readiness, verifies it serves HTTP, and tears it down automatically so automation never hangs. It still falls back to `pnpm install` / `uv sync --extra mypy` without `--frozen` if drift is detected and reminds you to reconcile the lockfiles so mypy stubs remain available for Husky hooks.
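The "wait for readiness, verify it serves HTTP" step above can be sketched roughly like this (a minimal illustration only; `wait_for_http` is a hypothetical name, not the actual implementation in `scripts/smoke_test.py`):

```python
import time
import urllib.error
import urllib.request


def wait_for_http(url: str, timeout: float = 30.0) -> bool:
    """Poll a URL until the app answers HTTP or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status < 500
        except urllib.error.HTTPError:
            return True  # the server answered, even if with an error status
        except OSError:
            time.sleep(0.5)  # not up yet; retry until the deadline
    return False
```

Bounding the wait like this is what lets automation tear the app down instead of hanging when the server never comes up.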
On GCP we recommend storing the same variables in Secret Manager and mounting them into Cloud Run/Cloud Functions to avoid duplicating secrets per environment. After generating .env, mirror it to Secret Manager so automation stays DRY:

```sh
python3 scripts/sync_env_to_gcp.py --create
```

The sync helper writes the exact .env content documented above, enabling Cloud Build (see cloudbuild/smoke-test.yaml) and GitHub Actions to run the shared smoke test without redefining secrets.
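Conceptually, such a sync first turns the .env content into a list of secret payloads. A minimal sketch of that step (the function name `plan_secret_sync` is hypothetical and the real `scripts/sync_env_to_gcp.py` may differ):

```python
def plan_secret_sync(env_text: str) -> list[tuple[str, bytes]]:
    """Turn .env content into (secret_id, payload) pairs for Secret Manager.

    Comments, blank lines, and malformed lines are skipped, so the plan
    mirrors exactly what the documented .env contains.
    """
    ops: list[tuple[str, bytes]] = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        ops.append((key.strip(), value.strip().encode()))
    return ops
```

Each resulting pair would then map onto one `gcloud secrets versions add` call (or an `add_secret_version` call via the google-cloud-secretmanager client), keeping .env as the single source of truth.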
| File | Purpose |
|---|---|
| `README.md` | High-level project overview and quickstart commands. |
| `docs/local-setup.md` | Minimal smoke test to validate local installs using shared .env values and reproducible package managers. |
| `docs/llms.txt` | Checklist of LLM provider environment variables to keep in centralized secrets (locally via .env, in production via GCP Secret Manager). |
| `AGENTS.md` | Guidance for AI contributors to keep workflows DRY and aligned with GCP deployment practices. |
The latest in-development version can be installed straight from GitHub with:

```sh
pip install git+https://github.com/Chainlit/chainlit.git#subdirectory=backend/
```

(Requires Node and pnpm installed on the system.)
Create a new file demo.py with the following code:

```python
import chainlit as cl


@cl.step(type="tool")
async def tool():
    # Fake tool
    await cl.sleep(2)
    return "Response from the tool!"


@cl.on_message  # this function will be called every time a user inputs a message in the UI
async def main(message: cl.Message):
    """
    This function is called every time a user inputs a message in the UI.
    It sends back an intermediate response from the tool, followed by the final answer.

    Args:
        message: The user's message.

    Returns:
        None.
    """
    # Call the tool
    tool_res = await tool()

    await cl.Message(content=tool_res).send()
```

Now run it!

```sh
chainlit run demo.py -w
```

You can find various examples of Chainlit apps here that leverage tools and services such as OpenAI, Anthropic, LangChain, LlamaIndex, ChromaDB, Pinecone and more.
Chainlit now ships with first-class support for the official Google GenAI SDK so you can build Gemini or Vertex AI agents without additional glue code. Install google-genai (or the legacy google-generativeai) alongside your Chainlit app and call:

```python
import os

import chainlit as cl
from google import genai

cl.instrument_google_genai()

client = genai.Client(
    api_key=os.environ.get("GEMINI_API_KEY"),
    # or, for Vertex AI:
    # vertexai=True, project=os.environ["VERTEX_PROJECT_ID"], location="us-central1"
)


@cl.on_message
async def on_message(message: cl.Message):
    # Generate a fresh response for each incoming message
    response = client.models.generate_content(
        model="gemini-1.5-flash",
        contents=message.content,
    )
    await cl.Message(content=response.text).send()
```

Every SDK call (including Agent Developer Kit client.agents.* helpers) automatically appears as an LLM step inside the Chainlit UI, capturing prompts, outputs, timing information and metadata. Require GEMINI_API_KEY, VERTEX_PROJECT_ID, or other credentials from your end users via config.toml's user_env list to collect them securely at runtime.
Tell us what you would like to see added to Chainlit using GitHub issues or on Discord.
As an open-source initiative in a rapidly evolving domain, we welcome contributions, be it through the addition of new features or the improvement of documentation.
For detailed information on how to contribute, see here.
Chainlit is open-source and licensed under the Apache 2.0 license.
