FAVA Trails: Federated Agents Versioned Audit Trail. VCS-backed memory for AI agents via MCP.
Every thought, decision, and observation is stored as a markdown file with YAML frontmatter, tracked in a Jujutsu (JJ) colocated git monorepo. Agents interact through MCP tools; they never see VCS commands.
- Supersession tracking: when an agent corrects a belief, the old version is hidden from default recall. No contradictory memories.
- Draft isolation: working thoughts stay in `drafts/`. Other agents only see promoted thoughts.
- Trust Gate: an LLM-based reviewer validates thoughts before they enter shared truth. Hallucinations stay contained in drafts.
- Full lineage: every thought carries who wrote it, when, and why it changed.
- Crash-proof: JJ auto-snapshots. No unsaved work.
- Engine/Fuel split: this repo is the engine (stateless MCP server). Your data lives in a separate repo you control.
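As a concrete illustration of the storage model described above, a stored thought file might look like the following. The frontmatter field names here are assumptions for illustration only, not the actual schema:

```markdown
---
# Illustrative frontmatter only; field names are hypothetical.
id: 2025-06-01-a1b2c3
author: agent-alpha
source_type: observation
superseded_by: null
created: 2025-06-01T12:00:00Z
---
The staging cluster runs Postgres 15, not 14 as previously recorded.
```

Because each thought is a plain markdown file in a JJ-tracked repo, lineage and supersession fall out of ordinary version control history.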
Install Jujutsu (JJ). FAVA Trails uses JJ as its VCS engine:

```shell
fava-trails install-jj
```

Or install manually from jj-vcs.github.io/jj.
Install from PyPI:

```shell
pip install fava-trails
```

Or from source:

```shell
git clone https://github.com/MachineWisdomAI/fava-trails.git
cd fava-trails
uv sync
```

New data repo (from scratch):
```shell
# Create an empty repo on GitHub (or any git remote), then clone it
git clone https://github.com/YOUR-ORG/fava-trails-data.git

# Bootstrap it (creates config, .gitignore, initializes JJ)
fava-trails bootstrap fava-trails-data
```

Existing data repo (clone from remote):
```shell
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data
```

Add to your MCP client config:
- Claude Code CLI: `~/.claude.json` (top-level `mcpServers` key)
- Claude Desktop: `claude_desktop_config.json`
If installed from PyPI:

```json
{
  "mcpServers": {
    "fava-trails": {
      "command": "fava-trails-server",
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}
```

If installed from source:

```json
{
  "mcpServers": {
    "fava-trails": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "--directory", "/path/to/fava-trails", "fava-trails-server"],
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}
```

For Claude Desktop on Windows (accessing WSL):

```json
{
  "mcpServers": {
    "fava-trails": {
      "command": "wsl.exe",
      "args": [
        "-e", "bash", "-lc",
        "FAVA_TRAILS_DATA_REPO=/path/to/fava-trails-data OPENROUTER_API_KEY=sk-or-v1-... fava-trails-server"
      ]
    }
  }
}
```

The Trust Gate uses LLM verification: thoughts are reviewed before promotion to ensure they're coherent and safe. By default, FAVA Trails uses OpenRouter to access 300–500+ models from 60+ providers including Anthropic, OpenAI, Google, Qwen, and others. Get a free API key at openrouter.ai/keys. The default model (`google/gemini-2.5-flash`) costs ~$0.001 per review. Multi-provider support via any-llm-sdk enables switching to other providers by modifying `config.yaml`.
Agents call MCP tools. Core workflow:
```python
save_thought(trail_name="myorg/eng/my-project", content="My finding about X", source_type="observation")
# → creates a draft in drafts/

propose_truth(trail_name="myorg/eng/my-project", thought_id=thought_id)
# → promotes to observations/ (visible to all agents)

recall(trail_name="myorg/eng/my-project", query="X")
# → finds the promoted thought
```
Agents never see VCS commands, and JJ expertise is not required.
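The supersession behavior from the feature list fits into this workflow: when a belief is corrected, the old thought drops out of default recall. A minimal sketch of that filtering, assuming each thought records a `superseded_by` pointer (an illustrative field name, not the actual schema):

```python
# Sketch of supersession-aware recall filtering.
# "superseded_by" is an illustrative field name, not the actual schema.
def visible_thoughts(thoughts):
    """Return only thoughts that have not been superseded."""
    return [t for t in thoughts if t.get("superseded_by") is None]

trail = [
    {"id": "t1", "content": "API limit is 100 req/s", "superseded_by": "t2"},
    {"id": "t2", "content": "API limit is 250 req/s", "superseded_by": None},
]
print([t["id"] for t in visible_thoughts(trail)])  # only the corrected belief survives
```

The key property: contradictory versions of the same belief never appear side by side in recall results.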
FAVA Trails uses git remotes for cross-machine sync. The `fava-trails bootstrap` command sets `push_strategy: immediate`, which auto-pushes after every write.
```shell
# 1. Install FAVA Trails
pip install fava-trails

# 2. Install JJ
fava-trails install-jj

# 3. Clone the SAME data repo (handles colocated mode + bookmark tracking)
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data

# 4. Register MCP (same config as above, with local paths)
```

Both machines push/pull through the same git remote. Use the `sync` MCP tool to pull the latest thoughts from other machines.
```shell
cd /path/to/fava-trails-data
jj bookmark set main -r @-
jj git push --bookmark main
```

NEVER use `git push origin main` after JJ colocates; it misses thought commits. See AGENTS_SETUP_INSTRUCTIONS.md for the correct protocol.
```
fava-trails (this repo)            fava-trails-data (your repo)
├── src/fava_trails/               ├── config.yaml
│   ├── server.py    ◄── MCP ──►   ├── .gitignore
│   ├── cli.py                     └── trails/
│   ├── trail.py                       └── myorg/eng/project/
│   ├── config.py                          └── thoughts/
│   ├── trust_gate.py                          ├── drafts/
│   ├── hook_manifest.py                       ├── decisions/
│   ├── protocols/                             ├── observations/
│   │   └── secom/                             └── preferences/
│   └── vcs/
│       └── jj_backend.py
└── tests/
```
- Engine (`fava-trails`): stateless MCP server, Apache-2.0. Install via `pip install fava-trails`.
- Fuel (`fava-trails-data`): your organization's trail data, private.
Environment variables:
| Variable | Read by | Purpose | Default |
|---|---|---|---|
| `FAVA_TRAILS_DATA_REPO` | Server | Root directory for trail data (monorepo root) | `~/.fava-trails` |
| `FAVA_TRAILS_DIR` | Server | Override trails directory location (absolute path) | `$FAVA_TRAILS_DATA_REPO/trails` |
| `FAVA_TRAILS_SCOPE_HINT` | Server | Broad scope hint baked into tool descriptions | (none) |
| `FAVA_TRAILS_SCOPE` | Agent | Project-specific scope from `.env` file | (none) |
| `OPENROUTER_API_KEY` | Server | API key for Trust Gate LLM reviews via OpenRouter | (none; required for `propose_truth`) |
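The table above implies a precedence order among the path variables. A sketch of how those documented defaults compose (the helper name is ours, not the server's API):

```python
import os
from pathlib import Path

def resolve_trails_dir(env):
    """Compose the documented defaults: FAVA_TRAILS_DIR wins if set;
    otherwise <data repo>/trails, with the data repo itself
    defaulting to ~/.fava-trails."""
    override = env.get("FAVA_TRAILS_DIR")
    if override:
        return Path(override)
    data_repo = env.get("FAVA_TRAILS_DATA_REPO", str(Path.home() / ".fava-trails"))
    return Path(data_repo) / "trails"

print(resolve_trails_dir({"FAVA_TRAILS_DATA_REPO": "/srv/data"}))  # /srv/data/trails on POSIX
print(resolve_trails_dir({"FAVA_TRAILS_DIR": "/mnt/trails"}))      # the absolute override wins
```

In practice you rarely need `FAVA_TRAILS_DIR`; setting `FAVA_TRAILS_DATA_REPO` in the MCP server `env` block is the common case.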
LLM Provider: FAVA Trails uses any-llm-sdk for unified LLM access. OpenRouter is the default provider (recommended for simplicity: a single API key covers 300–500+ models from 60+ providers). Additional providers (Anthropic, OpenAI, Bedrock, etc.) can be configured in `config.yaml` in future versions.
The server reads `$FAVA_TRAILS_DATA_REPO/config.yaml` for global settings. Minimal `config.yaml`:
```yaml
trails_dir: trails    # relative to FAVA_TRAILS_DATA_REPO
remote_url: null      # git remote URL (optional)
push_strategy: manual # manual | immediate
```

When `push_strategy: immediate`, the server auto-pushes after every successful write. Push failures are non-fatal.
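A sketch of those push semantics under the stated assumptions (the function and return strings are illustrative, not the server's internals): the write succeeds regardless, and a failed push is swallowed rather than raised.

```python
import subprocess

def write_and_maybe_push(repo_path, push_strategy):
    """Illustrative sketch of the documented behavior: with
    push_strategy 'immediate' a push is attempted after each write,
    and push failures are non-fatal."""
    # ... write the thought file and let JJ snapshot it here ...
    if push_strategy != "immediate":
        return "written"
    try:
        subprocess.run(["jj", "git", "push"], cwd=repo_path,
                       check=True, capture_output=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "written (push failed, non-fatal)"  # offline, missing remote, etc.
    return "written and pushed"

print(write_and_maybe_push("/path/to/fava-trails-data", "manual"))  # no push attempted
```

The non-fatal choice matters for agents: an offline machine keeps recording thoughts locally and syncs when the remote is reachable again.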
See AGENTS_SETUP_INSTRUCTIONS.md for full config reference including trust gate and per-trail overrides.
FAVA Trails supports optional lifecycle protocols: hook modules that run custom logic at key points in the thought lifecycle (save, promote, recall). Protocols are registered in your data repo's `config.yaml` and loaded at server startup.
Extractive token-level compression via LLMLingua-2, based on the SECOM paper (Tsinghua University and Microsoft, ICLR 2025). Thoughts are compressed once at promote time (WORM pattern), reducing storage and boosting recall density. Purely extractive: only original tokens survive, no paraphrasing or rewriting.
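A toy illustration of the extractive guarantee: every token in the compressed output appears in the original, in original order. The length-based scoring below is a stand-in for illustration; LLMLingua-2 uses a learned token classifier.

```python
# Toy extractive compression: keep a fraction of tokens, never rewrite.
# The importance score here (token length) is a stand-in; LLMLingua-2
# uses a learned classifier.
def extractive_compress(text, keep_ratio=0.6):
    tokens = text.split()
    ranked = sorted(range(len(tokens)), key=lambda i: -len(tokens[i]))
    keep = set(ranked[: max(1, int(len(tokens) * keep_ratio))])
    return " ".join(tokens[i] for i in sorted(keep))  # original order preserved

original = "the deployment failed because the certificate expired yesterday"
compressed = extractive_compress(original)
assert all(tok in original.split() for tok in compressed.split())  # extractive property
print(compressed)
```

This property is why extractive compression is safer for audit trails than abstractive summarization: nothing in the compressed thought was invented by a model.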
```shell
pip install fava-trails[secom]
```

Add to your data repo's `config.yaml`:
```yaml
hooks:
  - module: fava_trails.protocols.secom
    points: [before_propose, before_save, on_recall]
    order: 20
    fail_mode: open
    config:
      compression_threshold_chars: 500
      target_compress_rate: 0.6
      compression_engine:
        type: llmlingua
```

Structured data: SECOM's token-level compression has no notion of syntactic validity; JSON objects, YAML blocks, and fenced code blocks may be silently destroyed at promote time. Tag thoughts with `secom-skip` to opt out:
```python
save_thought(trail_name="my/scope", content='{"phases": [...]}', metadata={"tags": ["secom-skip"]})
```

The `before_save` hook warns when structured content is detected without `secom-skip`.
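A hypothetical heuristic for the kind of check that warning implies (the function is ours, not the hook's actual implementation): flag content that parses as JSON or contains a fenced code block, so it can be tagged `secom-skip` before compression mangles it.

```python
import json

# Hypothetical structured-content detector, for illustration only.
def looks_structured(content):
    if "```" in content:          # fenced code block
        return True
    try:
        json.loads(content)       # valid JSON document
        return True
    except ValueError:
        return False

assert looks_structured('{"phases": [1, 2]}')
assert not looks_structured("plain prose about an incident")
```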
See protocols/secom/README.md for full config reference, model options, and the secom-skip opt-out. See AGENTS_SETUP_INSTRUCTIONS.md for the general hooks system.
Quick setup via CLI:
```shell
# Print default config (copy-paste into config.yaml)
fava-trails secom setup

# Write config directly + commit with jj
fava-trails secom setup --write

# Pre-download model to avoid first-use delay
fava-trails secom warmup
```

Playbook-driven reranking and anti-pattern detection, based on ACE (arXiv:2510.04618; Stanford, UC Berkeley, and SambaNova, ICLR 2026). Applies multiplicative scoring using rules stored in the `preferences/` namespace.
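A minimal sketch of multiplicative reranking under stated assumptions; the rule shape and weights below are illustrative, not the protocol's actual schema:

```python
# Illustrative multiplicative reranking over playbook rules.
def rerank(results, rules):
    """Each matching rule multiplies a result's base score:
    boosts (>1.0) reward playbook patterns, penalties (<1.0)
    demote known anti-patterns."""
    scored = []
    for r in results:
        score = r["base_score"]
        for rule in rules:
            if rule["pattern"] in r["content"]:
                score *= rule["weight"]
        scored.append((score, r["content"]))
    return [content for _, content in sorted(scored, reverse=True)]

rules = [
    {"pattern": "retry with backoff", "weight": 1.5},  # preferred pattern
    {"pattern": "sleep(60)", "weight": 0.4},           # anti-pattern
]
results = [
    {"content": "fix: sleep(60) in loop", "base_score": 0.9},
    {"content": "fix: retry with backoff", "base_score": 0.8},
]
print(rerank(results, rules))  # the anti-pattern fix sinks below the playbook fix
```

Multiplicative (rather than additive) scoring means several weak rule matches compound, which is what lets a strong anti-pattern penalty override a high base relevance score.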
```shell
pip install fava-trails  # included in base install
```

Add to your data repo's `config.yaml`:
```yaml
hooks:
  - module: fava_trails.protocols.ace
    points: [on_startup, on_recall, before_save, after_save, after_propose, after_supersede]
    order: 10
    fail_mode: open
    config:
      playbook_namespace: preferences
      telemetry_max_per_scope: 10000
```

Quick setup via CLI:
```shell
fava-trails ace setup          # print default config
fava-trails ace setup --write  # write + jj commit
```

Lifecycle hooks for MIT RLM (arXiv:2512.24601) MapReduce workflows. Validates mapper outputs, tracks batch progress, and sorts results deterministically for reducer consumption.
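A sketch of the validate-then-sort behavior just described, matching the `expected_mappers` and `min_mapper_output_chars` settings below; the function and field names are illustrative, not the protocol's actual schema:

```python
# Illustrative mapper-output validation + deterministic ordering.
def collect_for_reducer(mapper_outputs, expected_mappers, min_chars):
    if len(mapper_outputs) != expected_mappers:
        raise ValueError(f"expected {expected_mappers} mapper outputs, "
                         f"got {len(mapper_outputs)}")
    for out in mapper_outputs:
        if len(out["content"]) < min_chars:
            raise ValueError(f"mapper {out['mapper_id']} output too short")
    # Deterministic order for the reducer: sort by mapper id, not arrival order.
    return sorted(mapper_outputs, key=lambda o: o["mapper_id"])

outputs = [
    {"mapper_id": "m2", "content": "summary of shard two, twenty+ chars"},
    {"mapper_id": "m1", "content": "summary of shard one, twenty+ chars"},
]
print([o["mapper_id"] for o in collect_for_reducer(outputs, 2, 20)])  # sorted by id
```

Deterministic ordering matters because the reducer's output (and therefore the audit trail) should not depend on which mapper happened to finish first.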
```shell
pip install fava-trails  # included in base install
```

Add to your data repo's `config.yaml`:
```yaml
hooks:
  - module: fava_trails.protocols.rlm
    points: [before_save, after_save, on_recall]
    order: 15
    fail_mode: closed
    config:
      expected_mappers: 5
      min_mapper_output_chars: 20
```

Quick setup via CLI:
```shell
fava-trails rlm setup          # print default config
fava-trails rlm setup --write  # write + jj commit
```

Run the tests with:

```shell
uv run pytest -v     # run tests
uv run pytest --cov  # with coverage
```

- AGENTS.md: agent-facing docs (MCP tools reference, scope discovery, thought lifecycle, agent conventions)
- AGENTS_USAGE_INSTRUCTIONS.md: canonical usage (scope discovery, session protocol, agent identity)
- AGENTS_SETUP_INSTRUCTIONS.md: data repo setup, config reference, trust gate prompts, lifecycle hooks
- protocols/secom/README.md: SECOM compression protocol (config, models, WORM architecture)
- docs/fava_trails_faq.md: detailed FAQ for framework authors and ML engineers
See CONTRIBUTING.md for setup instructions, how to run tests, and PR expectations.
See CHANGELOG.md for release history.