FAVA Trails

Federated Agents Versioned Audit Trail — VCS-backed memory for AI agents via MCP.

Every thought, decision, and observation is stored as a markdown file with YAML frontmatter, tracked in a Jujutsu (JJ) colocated git monorepo. Agents interact through MCP tools — they never see VCS commands.

Why

  • Supersession tracking — when an agent corrects a belief, the old version is hidden from default recall. No contradictory memories.
  • Draft isolation — working thoughts stay in drafts/. Other agents only see promoted thoughts.
  • Trust Gate — an LLM-based reviewer validates thoughts before they enter shared truth. Hallucinations stay contained in draft.
  • Full lineage — every thought carries who wrote it, when, and why it changed.
  • Crash-proof — JJ auto-snapshots. No unsaved work.
  • Engine/Fuel split — this repo is the engine (stateless MCP server). Your data lives in a separate repo you control.
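Concretely, a promoted thought's on-disk form might look like the following. The frontmatter field names here are illustrative, not the actual fava-trails schema:

```markdown
---
id: thought-0042                  # hypothetical ID scheme
author: agent-alpha               # lineage: who wrote it
created: 2025-06-01T10:30:00Z     # ...and when
source_type: observation
supersedes: thought-0017          # the old belief this hides from recall
---
The staging deploy fails because the TLS certificate expired,
not because of a network partition.
```

Because `supersedes` points at the earlier thought, default recall returns only this corrected version.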

Install

Prerequisites

Install Jujutsu (JJ) — FAVA Trails uses JJ as its VCS engine:

fava-trails install-jj

Or install manually from jj-vcs.github.io/jj.

From PyPI (recommended)

pip install fava-trails

From source (for development)

git clone https://github.com/MachineWisdomAI/fava-trails.git
cd fava-trails
uv sync

Quick Start

Set up your data repo

New data repo (from scratch):

# Create an empty repo on GitHub (or any git remote), then clone it
git clone https://github.com/YOUR-ORG/fava-trails-data.git

# Bootstrap it (creates config, .gitignore, initializes JJ)
fava-trails bootstrap fava-trails-data

Existing data repo (clone from remote):

fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data

Register the MCP server

Add to your MCP client config:

  • Claude Code CLI: ~/.claude.json (top-level mcpServers key)
  • Claude Desktop: claude_desktop_config.json

If installed from PyPI:

{
  "mcpServers": {
    "fava-trails": {
      "command": "fava-trails-server",
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}

If installed from source:

{
  "mcpServers": {
    "fava-trails": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "--directory", "/path/to/fava-trails", "fava-trails-server"],
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}

For Claude Desktop on Windows (accessing WSL):

{
  "mcpServers": {
    "fava-trails": {
      "command": "wsl.exe",
      "args": [
        "-e", "bash", "-lc",
        "FAVA_TRAILS_DATA_REPO=/path/to/fava-trails-data OPENROUTER_API_KEY=sk-or-v1-... fava-trails-server"
      ]
    }
  }
}

The Trust Gate uses LLM verification: thoughts are reviewed before promotion to ensure they're coherent and safe. By default, FAVA Trails uses OpenRouter to access 300–500+ models from 60+ providers, including Anthropic, OpenAI, Google, Qwen, and others. Get a free API key at openrouter.ai/keys. The default model (google/gemini-2.5-flash) costs ~$0.001 per review. Multi-provider support via any-llm-sdk enables switching to other providers by modifying config.yaml.
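The gating behavior can be sketched as follows. The reviewer here is a stub standing in for the real LLM call, and the function shapes and return strings are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    reason: str

def stub_reviewer(content: str) -> Review:
    # Stand-in for the LLM call; the real gate sends the thought to a
    # model such as google/gemini-2.5-flash and parses its verdict.
    if not content.strip():
        return Review(False, "empty thought")
    return Review(True, "coherent and safe")

def propose_truth(content: str, reviewer=stub_reviewer) -> str:
    review = reviewer(content)
    if not review.approved:
        # Rejected thoughts stay in drafts/ and never enter shared truth.
        return f"draft (rejected: {review.reason})"
    return "promoted to observations/"

print(propose_truth("My finding about X"))  # promoted to observations/
print(propose_truth(""))                    # draft (rejected: empty thought)
```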

Use it

Agents call MCP tools. Core workflow:

save_thought(trail_name="myorg/eng/my-project", content="My finding about X", source_type="observation")
  → creates a draft in drafts/

propose_truth(trail_name="myorg/eng/my-project", thought_id=thought_id)
  → promotes to observations/ (visible to all agents)

recall(trail_name="myorg/eng/my-project", query="X")
  → finds the promoted thought

Agents interact through MCP tools — they never see VCS commands. JJ expertise is not required.
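For intuition, here is a toy in-memory model of that lifecycle, including supersession. The real server persists each thought as a file in a JJ-tracked repo; the class and method shapes below are illustrative only:

```python
class Trail:
    """Toy model: drafts are invisible, promotion makes thoughts
    recallable, supersession hides the old belief from default recall."""

    def __init__(self):
        self.thoughts = {}  # id -> {content, status, superseded_by}
        self._next = 0

    def save_thought(self, content):
        tid = f"t{self._next}"; self._next += 1
        self.thoughts[tid] = {"content": content, "status": "draft",
                              "superseded_by": None}
        return tid

    def propose_truth(self, tid):
        self.thoughts[tid]["status"] = "promoted"

    def supersede(self, tid, new_content):
        new_id = self.save_thought(new_content)
        self.propose_truth(new_id)
        self.thoughts[tid]["superseded_by"] = new_id
        return new_id

    def recall(self, query):
        # Default recall: promoted, non-superseded thoughts only.
        return [t["content"] for t in self.thoughts.values()
                if t["status"] == "promoted"
                and t["superseded_by"] is None
                and query.lower() in t["content"].lower()]

trail = Trail()
tid = trail.save_thought("X is caused by a race condition")
assert trail.recall("X") == []  # drafts stay invisible to other agents
trail.propose_truth(tid)
trail.supersede(tid, "X is caused by a config typo, not a race")
print(trail.recall("X"))  # only the corrected belief survives recall
```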

Cross-Machine Sync

FAVA Trails uses git remotes for cross-machine sync. The fava-trails bootstrap command sets push_strategy: immediate, which auto-pushes after every write.

Setting up a second machine

# 1. Install FAVA Trails
pip install fava-trails

# 2. Install JJ
fava-trails install-jj

# 3. Clone the SAME data repo (handles colocated mode + bookmark tracking)
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data

# 4. Register MCP (same config as above, with local paths)

Both machines push/pull through the same git remote. Use the sync MCP tool to pull latest thoughts from other machines.

Manual push (if auto-push is off)

cd /path/to/fava-trails-data
jj bookmark set main -r @-
jj git push --bookmark main

NEVER use git push origin main after JJ colocates — it misses thought commits. See AGENTS_SETUP_INSTRUCTIONS.md for the correct protocol.

Architecture

fava-trails (this repo)        fava-trails-data (your repo)
├── src/fava_trails/           ├── config.yaml
│   ├── server.py  ←── MCP ──→ ├── .gitignore
│   ├── cli.py                 └── trails/
│   ├── trail.py                   └── myorg/eng/project/
│   ├── config.py                      └── thoughts/
│   ├── trust_gate.py                      ├── drafts/
│   ├── hook_manifest.py                   ├── decisions/
│   ├── protocols/                         ├── observations/
│   │   └── secom/                         └── preferences/
│   └── vcs/
│       └── jj_backend.py
└── tests/

  • Engine (fava-trails) — stateless MCP server, Apache-2.0. Install via pip install fava-trails.
  • Fuel (fava-trails-data) — your organization's trail data, private.

Configuration

Environment variables:

| Variable | Read by | Purpose | Default |
| --- | --- | --- | --- |
| FAVA_TRAILS_DATA_REPO | Server | Root directory for trail data (monorepo root) | ~/.fava-trails |
| FAVA_TRAILS_DIR | Server | Override trails directory location (absolute path) | $FAVA_TRAILS_DATA_REPO/trails |
| FAVA_TRAILS_SCOPE_HINT | Server | Broad scope hint baked into tool descriptions | (none) |
| FAVA_TRAILS_SCOPE | Agent | Project-specific scope from .env file | (none) |
| OPENROUTER_API_KEY | Server | API key for Trust Gate LLM reviews via OpenRouter | (none — required for propose_truth) |

LLM Provider: FAVA Trails uses any-llm-sdk for unified LLM access. OpenRouter is the default provider (recommended for simplicity: a single API key covers 300–500+ models from 60+ providers). Additional providers (Anthropic, OpenAI, Bedrock, etc.) can be configured in config.yaml in future versions.

The server reads $FAVA_TRAILS_DATA_REPO/config.yaml for global settings. Minimal config.yaml:

trails_dir: trails          # relative to FAVA_TRAILS_DATA_REPO
remote_url: null            # git remote URL (optional)
push_strategy: manual       # manual | immediate

When push_strategy: immediate, the server auto-pushes after every successful write. Push failures are non-fatal.
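A sketch of that behavior, assuming the server shells out to jj as the manual-push commands above suggest (the function name and logging are made up for illustration):

```python
import subprocess

def push_after_write(repo: str, push_strategy: str = "manual") -> None:
    """After a successful write: push immediately if configured to,
    and treat push failures as non-fatal (the write is already
    committed locally; a later write or manual push can retry)."""
    if push_strategy != "immediate":
        return  # manual: the operator runs `jj git push` themselves
    try:
        subprocess.run(
            ["jj", "git", "push", "--bookmark", "main"],
            cwd=repo, check=True, capture_output=True,
        )
    except (OSError, subprocess.CalledProcessError) as exc:
        # Non-fatal: report and carry on rather than failing the write.
        print(f"push failed (non-fatal): {exc}")

push_after_write("/no/such/dir", "manual")     # no-op
push_after_write("/no/such/dir", "immediate")  # prints a warning, no raise
```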

See AGENTS_SETUP_INSTRUCTIONS.md for full config reference including trust gate and per-trail overrides.

Protocols

FAVA Trails supports optional lifecycle protocols — hook modules that run custom logic at key points in the thought lifecycle (save, promote, recall). Protocols are registered in your data repo's config.yaml and loaded at server startup.

SECOM — Compression at Promote Time

Extractive token-level compression via LLMLingua-2, based on the SECOM paper (Tsinghua University and Microsoft, ICLR 2025). Thoughts are compressed once at promote time (WORM pattern), reducing storage and boosting recall density. Purely extractive — only original tokens survive, no paraphrasing or rewriting.

pip install fava-trails[secom]

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.secom
    points: [before_propose, before_save, on_recall]
    order: 20
    fail_mode: open
    config:
      compression_threshold_chars: 500
      target_compress_rate: 0.6
      compression_engine:
        type: llmlingua
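The key property is that compression is extractive: the output is a subset of the original tokens, in their original order. A rough stand-in for intuition; the real hook scores tokens with LLMLingua-2's learned importance model, not word frequency:

```python
from collections import Counter

def extractive_compress(text: str, rate: float = 0.6) -> str:
    """Keep roughly `rate` of the tokens, preserving order. Scorer is a
    toy: rarer tokens are treated as more informative."""
    tokens = text.split()
    freq = Counter(t.lower() for t in tokens)
    keep_n = max(1, int(len(tokens) * rate))
    # Rank positions by score (rare tokens first), keep the top slice,
    # then restore original order.
    ranked = sorted(range(len(tokens)), key=lambda i: freq[tokens[i].lower()])
    kept = sorted(ranked[:keep_n])
    return " ".join(tokens[i] for i in kept)

text = "the the the server crashed because the disk filled up"
out = extractive_compress(text, rate=0.6)
print(out)  # server crashed because disk filled up
assert all(tok in text.split() for tok in out.split())  # purely extractive
```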

Structured data: SECOM's token-level compression has no notion of syntactic validity — JSON objects, YAML blocks, and fenced code blocks may be silently destroyed at promote time. Tag thoughts with secom-skip to opt out:

save_thought(trail_name="my/scope", content='{"phases": [...]}', metadata={"tags": ["secom-skip"]})

The before_save hook warns when structured content is detected without secom-skip.
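One plausible shape for that detection; the actual heuristic inside fava_trails is not documented in this README:

```python
import json

def looks_structured(content: str) -> bool:
    """Guess whether content is structured: fenced code blocks, or a
    body that parses as JSON. (Assumed logic, not the real hook's.)"""
    if "```" in content:
        return True
    stripped = content.strip()
    if stripped[:1] in "{[":
        try:
            json.loads(stripped)
            return True
        except ValueError:
            pass
    return False

print(looks_structured('{"phases": [1, 2]}'))   # True -> warn if no secom-skip
print(looks_structured("plain prose thought"))  # False
```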

See protocols/secom/README.md for full config reference, model options, and the secom-skip opt-out. See AGENTS_SETUP_INSTRUCTIONS.md for the general hooks system.

Quick setup via CLI:

# Print default config (copy-paste into config.yaml)
fava-trails secom setup

# Write config directly + commit with jj
fava-trails secom setup --write

# Pre-download model to avoid first-use delay
fava-trails secom warmup

ACE — Agentic Context Engineering

Playbook-driven reranking and anti-pattern detection, based on the ACE paper (arXiv:2510.04618; Stanford, UC Berkeley, and SambaNova, ICLR 2026). Applies multiplicative scoring using rules stored in the preferences/ namespace.

pip install fava-trails  # included in base install

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.ace
    points: [on_startup, on_recall, before_save, after_save, after_propose, after_supersede]
    order: 10
    fail_mode: open
    config:
      playbook_namespace: preferences
      telemetry_max_per_scope: 10000

Quick setup via CLI:

fava-trails ace setup           # print default config
fava-trails ace setup --write   # write + jj commit
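The multiplicative scoring works roughly like this; the rule schema and weights below are invented for illustration, not ACE's actual playbook format:

```python
# Each playbook rule that matches a recall candidate multiplies its
# base score: anti-patterns demote (< 1.0), preferences boost (> 1.0).
RULES = [
    {"pattern": "deprecated", "multiplier": 0.2},  # anti-pattern
    {"pattern": "verified",   "multiplier": 1.5},  # preferred
]

def rerank(candidates):
    def score(c):
        s = c["base_score"]
        for rule in RULES:
            if rule["pattern"] in c["content"].lower():
                s *= rule["multiplier"]
        return s
    return sorted(candidates, key=score, reverse=True)

hits = [
    {"content": "use the deprecated v1 API", "base_score": 0.9},
    {"content": "verified fix for the v2 API", "base_score": 0.7},
]
# 0.9 * 0.2 = 0.18 vs 0.7 * 1.5 = 1.05: the verified fix wins.
print([h["content"] for h in rerank(hits)])
```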

RLM — MapReduce Orchestration

Lifecycle hooks for MIT RLM (arXiv:2512.24601) MapReduce workflows. Validates mapper outputs, tracks batch progress, and sorts results deterministically for reducer consumption.

pip install fava-trails  # included in base install

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.rlm
    points: [before_save, after_save, on_recall]
    order: 15
    fail_mode: closed
    config:
      expected_mappers: 5
      min_mapper_output_chars: 20
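Those checks amount to something like the following sketch, mirroring the config keys above (the field names on mapper outputs are assumptions):

```python
# Validate mapper outputs, track batch completeness, and sort
# deterministically so the reducer sees stable input across runs.
EXPECTED_MAPPERS = 5        # mirrors expected_mappers
MIN_OUTPUT_CHARS = 20       # mirrors min_mapper_output_chars

def validate_mapper_output(out: dict) -> None:
    if len(out["content"]) < MIN_OUTPUT_CHARS:
        raise ValueError(f"mapper {out['mapper_id']}: output too short")

def reduce_ready(outputs: list) -> list:
    for out in outputs:
        validate_mapper_output(out)
    if len(outputs) < EXPECTED_MAPPERS:
        raise ValueError(f"batch incomplete: {len(outputs)}/{EXPECTED_MAPPERS}")
    return sorted(outputs, key=lambda o: o["mapper_id"])

outputs = [{"mapper_id": i, "content": f"summary of shard {i} with details"}
           for i in (3, 0, 4, 1, 2)]
print([o["mapper_id"] for o in reduce_ready(outputs)])  # [0, 1, 2, 3, 4]
```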

Quick setup via CLI:

fava-trails rlm setup           # print default config
fava-trails rlm setup --write   # write + jj commit

Development

uv run pytest -v          # run tests
uv run pytest --cov       # with coverage

Docs

Contributing

See CONTRIBUTING.md for setup instructions, how to run tests, and PR expectations.

See CHANGELOG.md for release history.

About

🫛👣 FAVA Trails — versioned, auditable memory for AI agents via MCP. Backed by Jujutsu VCS.
