- Dual Interface: REST API and Model Context Protocol (MCP) server
- Two-Tier Memory: Working memory (session-scoped) and long-term memory (persistent)
- Configurable Memory Strategies: Customize how memories are extracted (discrete, summary, preferences, custom)
- Semantic Search: Vector-based similarity search with metadata filtering
- Flexible Backends: Pluggable vector store factory system
- Multi-Provider LLM Support: OpenAI, Anthropic, AWS Bedrock, Ollama, Azure, Gemini via LiteLLM
- AI Integration: Automatic topic extraction, entity recognition, and conversation summarization
- Python SDK: Easy integration with AI applications
Pre-built Docker images are available from:
- Docker Hub: redislabs/agent-memory-server
- GitHub Packages: ghcr.io/redis/agent-memory-server
Quick Start (Development Mode):
# Start with docker-compose
# Note: Both 'api' and 'api-for-task-worker' services use port 8000
# Choose one depending on your needs:
# Option 1: Development mode (no worker, immediate task execution)
docker compose up api redis
# Option 2: Production-like mode (with background worker)
docker compose up api-for-task-worker task-worker redis mcp
# Or run just the API server (requires separate Redis)
docker run -p 8000:8000 \
-e REDIS_URL=redis://your-redis:6379 \
-e OPENAI_API_KEY=your-key \
redislabs/agent-memory-server:latest \
agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio

By default, the image runs the API with the Docket task backend, which expects a separate agent-memory task-worker process for non-blocking background tasks. The example above shows how to override this to use the asyncio backend for a single-container development setup.
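Once the container is running, a quick way to confirm the API is reachable is a plain HTTP health check. This is a minimal sketch; the `/v1/health` path is an assumption, so substitute whatever health endpoint your deployment exposes (see the API Reference).

```python
import asyncio

import httpx


async def check_server(base_url: str = "http://localhost:8000") -> None:
    # NOTE: /v1/health is an assumed path; adjust it to the health endpoint
    # documented for your server version.
    async with httpx.AsyncClient(base_url=base_url) as http:
        response = await http.get("/v1/health")
        response.raise_for_status()
        print("Server is up:", response.status_code)


asyncio.run(check_server())
```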
Production Deployment:
For production, run separate containers for the API and background workers:
# API Server (without background worker)
docker run -p 8000:8000 \
-e REDIS_URL=redis://your-redis:6379 \
-e OPENAI_API_KEY=your-key \
-e DISABLE_AUTH=false \
redislabs/agent-memory-server:latest \
agent-memory api --host 0.0.0.0 --port 8000
# Background Worker (separate container)
docker run \
-e REDIS_URL=redis://your-redis:6379 \
-e OPENAI_API_KEY=your-key \
redislabs/agent-memory-server:latest \
agent-memory task-worker --concurrency 10
# MCP Server (if needed)
docker run -p 9000:9000 \
-e REDIS_URL=redis://your-redis:6379 \
-e OPENAI_API_KEY=your-key \
redislabs/agent-memory-server:latest \
agent-memory mcp --mode sse --port 9000

# Install dependencies
pip install uv
uv sync --all-extras
# Start Redis
docker-compose up redis
# Start the server (development mode, asyncio task backend)
uv run agent-memory api --task-backend=asyncio

Allowing the server to extract memories from working memory is easiest. However, you can also manually create memories:
# Install the client
pip install agent-memory-client
# For LangChain integration
pip install agent-memory-client langchain-core

from agent_memory_client import MemoryAPIClient
# Connect to server
client = MemoryAPIClient(base_url="http://localhost:8000")
# Store memories
await client.create_long_term_memories([
{
"text": "User prefers morning meetings",
"user_id": "user123",
"memory_type": "preference"
}
])
# Search memories
results = await client.search_long_term_memory(
text="What time does the user like meetings?",
user_id="user123"
)

Note: While you can call client functions directly as shown above, using MCP or SDK-provided tool calls is recommended for AI agents because it provides better integration and automatic context management, and it follows AI-native patterns. For the best performance, you can add messages to working memory and allow the server to extract memories in the background. See Memory Integration Patterns for guidance on when to use each approach.
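As a rough sketch of that working-memory flow: the client writes conversation messages to session-scoped working memory, and the server extracts long-term memories from them in the background. The `put_working_memory` call and the `WorkingMemory` / `MemoryMessage` models used below are assumptions about the SDK surface; check the Python SDK reference for the exact working-memory API.

```python
from agent_memory_client import MemoryAPIClient

# NOTE: the model and method names below are assumptions; consult the
# Python SDK reference for the exact working-memory API.
from agent_memory_client.models import MemoryMessage, WorkingMemory


async def record_turn(client: MemoryAPIClient, session_id: str, user_id: str) -> None:
    # Store the latest conversation turn in session-scoped working memory;
    # the server extracts long-term memories from it in the background.
    memory = WorkingMemory(
        session_id=session_id,
        user_id=user_id,
        messages=[
            MemoryMessage(role="user", content="I prefer morning meetings."),
            MemoryMessage(role="assistant", content="Noted - mornings it is."),
        ],
    )
    await client.put_working_memory(session_id, memory)
```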
For LangChain users, the SDK provides automatic conversion of memory client tools to LangChain-compatible tools, eliminating the need for manual wrapping with @tool decorators.
from agent_memory_client import create_memory_client
from agent_memory_client.integrations.langchain import get_memory_tools
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
# Get LangChain-compatible tools automatically
memory_client = await create_memory_client("http://localhost:8000")
tools = get_memory_tools(
memory_client=memory_client,
session_id="my_session",
user_id="alice"
)
# Create prompt and agent
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant with memory."),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
])
llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
# Use the agent
result = await executor.ainvoke({"input": "Remember that I love pizza"})
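To confirm the memory tools are wired up, you can follow up with a question that requires recall; this continues the example above and reuses the same executor (the exact wording of the reply depends on the model).

```python
# Ask the agent to recall what was just stored; under the hood the
# LangChain-wrapped memory tools search long-term memory.
followup = await executor.ainvoke({"input": "What food do I love?"})
print(followup["output"])
```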
# Start MCP server (stdio mode - recommended for Claude Desktop)
uv run agent-memory mcp
# Or with SSE mode (development mode, default asyncio backend)
uv run agent-memory mcp --mode sse --port 9000

Use this in your MCP tool configuration (e.g., Claude Desktop mcp.json):
{
"mcpServers": {
"memory": {
"command": "uvx",
"args": ["--from", "agent-memory-server", "agent-memory", "mcp"],
"env": {
"DISABLE_AUTH": "true",
"REDIS_URL": "redis://localhost:6379",
"OPENAI_API_KEY": "<your-openai-key>"
}
}
}
}

Notes:

- API keys: Set either `OPENAI_API_KEY` (the default models use OpenAI) or switch to Anthropic by setting `ANTHROPIC_API_KEY` and `GENERATION_MODEL` to an Anthropic model (e.g., `claude-3-5-haiku-20241022`).
- Make sure your MCP host can find `uvx` (on its PATH or by using an absolute command path).
  - macOS: `brew install uv`
  - If `uvx` is not on PATH, set `"command"` to its absolute path (e.g., `/opt/homebrew/bin/uvx` on Apple Silicon, `/usr/local/bin/uvx` on Intel macOS). On Linux, `~/.local/bin/uvx` is common. See https://docs.astral.sh/uv/getting-started/
- For production, remove `DISABLE_AUTH` and configure proper authentication.
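Following the API-keys note above, an Anthropic-based variant of the same configuration might look like the sketch below (an OpenAI key is kept for embeddings, matching the provider examples that follow); treat it as a starting point rather than a tested config.

```json
{
  "mcpServers": {
    "memory": {
      "command": "uvx",
      "args": ["--from", "agent-memory-server", "agent-memory", "mcp"],
      "env": {
        "DISABLE_AUTH": "true",
        "REDIS_URL": "redis://localhost:6379",
        "ANTHROPIC_API_KEY": "<your-anthropic-key>",
        "GENERATION_MODEL": "claude-3-5-haiku-20241022",
        "OPENAI_API_KEY": "<your-openai-key>"
      }
    }
  }
}
```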
The server uses LiteLLM to support 100+ LLM providers. Configure via environment variables:
# OpenAI (default)
export OPENAI_API_KEY=sk-...
export GENERATION_MODEL=gpt-4o
export EMBEDDING_MODEL=text-embedding-3-small
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export GENERATION_MODEL=claude-3-5-sonnet-20241022
export EMBEDDING_MODEL=text-embedding-3-small # Use OpenAI for embeddings
# AWS Bedrock
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION_NAME=us-east-1
export GENERATION_MODEL=anthropic.claude-sonnet-4-5-20250929-v1:0
export EMBEDDING_MODEL=bedrock/amazon.titan-embed-text-v2:0 # Note: bedrock/ prefix required
# Ollama (local)
export OLLAMA_API_BASE=http://localhost:11434
export GENERATION_MODEL=ollama/llama2
export EMBEDDING_MODEL=ollama/nomic-embed-text
export REDISVL_VECTOR_DIMENSIONS=768  # Required for Ollama

See LLM Providers for complete configuration options.
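Because provider selection happens entirely on the server through these variables, client code stays the same when you switch providers; the search call from the SDK example above works unchanged against any configured backend. A minimal sketch:

```python
import asyncio

from agent_memory_client import MemoryAPIClient


async def main() -> None:
    # Identical client code whether the server is configured for OpenAI,
    # Anthropic, Bedrock, or Ollama; the provider choice is server-side.
    client = MemoryAPIClient(base_url="http://localhost:8000")
    results = await client.search_long_term_memory(
        text="What time does the user like meetings?",
        user_id="user123",
    )
    print(results)


asyncio.run(main())
```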
Full Documentation - Complete guides, API reference, and examples
- Quick Start Guide - Get up and running in minutes
- Python SDK - Complete SDK reference with examples
- LangChain Integration - Automatic tool conversion for LangChain
- LLM Providers - Configure OpenAI, Anthropic, AWS Bedrock, Ollama, and more
- Embedding Providers - Configure embedding models for semantic search
- Vector Store Backends - Configure different vector databases
- Authentication - OAuth2/JWT setup for production
- Memory Types - Understanding semantic vs episodic memory
- API Reference - REST API endpoints
- MCP Protocol - Model Context Protocol integration
| Working Memory (Session-scoped) | Long-term Memory (Persistent) |
|---------------------------------|-------------------------------|
| Messages                        | Semantic search               |
| Structured memories             | Topic modeling                |
| Summary of past messages        | Entity recognition            |
| Metadata                        | Deduplication                 |
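Because long-term records carry metadata alongside their embeddings, semantic search can be narrowed to a user, session, or topic. The sketch below reuses the SDK search call from earlier; the `topics` filter argument and the shape of the returned results are assumptions about the search API, so check the API Reference for exact names.

```python
from agent_memory_client import MemoryAPIClient


async def recall_scheduling_preferences(client: MemoryAPIClient) -> None:
    # Vector similarity plus metadata filtering: `text` and `user_id` match
    # the earlier example; `topics` is an assumed filter parameter name.
    results = await client.search_long_term_memory(
        text="meeting time preferences",
        user_id="user123",
        topics=["scheduling"],
    )
    # Assumed result shape: a container holding a list of memory records.
    for memory in results.memories:
        print(memory.text)
```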
- AI Assistants: Persistent memory across conversations
- Customer Support: Context from previous interactions
- Personal AI: Learning user preferences and history
- Research Assistants: Accumulating knowledge over time
- Chatbots: Maintaining context and personalization
# Install dependencies
uv sync --all-extras
# Run tests
uv run pytest
# Format code
uv run ruff format
uv run ruff check
# Start development stack (choose one based on your needs)
docker compose up api redis # Development mode
docker compose up api-for-task-worker task-worker redis # Production-like mode

Apache License 2.0 - see LICENSE file for details.
We welcome contributions! Please see the development documentation for guidelines.