Decision Infrastructure for AI agents.
Stop agents before they make expensive mistakes.
Try it in 10 seconds
```bash
npx dashclaw-demo
```

No setup. Opens Decision Replay automatically.
Works with:
LangChain • CrewAI • OpenClaw • OpenAI • Anthropic • AutoGen • Claude Code • Codex • Gemini CLI • Custom agents
Intercept decisions. Enforce policies. Record evidence.
Agent → DashClaw → External Systems
DashClaw sits between your agents and your external systems. It evaluates policies before an agent action executes and records verifiable evidence of every decision.
DashClaw is not observability. It is control before execution.
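"Control before execution" in miniature (illustrative only, not DashClaw code): a wrapper consults a policy check before an action runs, instead of logging it after the fact.

```python
# Illustrative interposition pattern: the check happens BEFORE execution.
class Blocked(Exception):
    pass

def governed(check):
    """Decorator: run `check(action_type)` before the wrapped action executes."""
    def wrap(fn):
        def inner(action_type, *args, **kwargs):
            if not check(action_type):
                raise Blocked(action_type)  # stopped before anything runs
            return fn(action_type, *args, **kwargs)
        return inner
    return wrap

def allow_only_reads(action_type):
    return action_type == "read"

@governed(allow_only_reads)
def run(action_type, payload):
    return f"executed {action_type}"

run("read", {})      # → "executed read"
# run("deploy", {})  # raises Blocked before the action executes
```

Observability tools would record the deploy after it happened; the wrapper refuses to run it at all.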
AI agents generate actions from goals and context. They do not follow deterministic code paths. Therefore debugging alone is insufficient. Agents require governance.
DashClaw provides decision infrastructure to:
- Intercept risky agent actions.
- Enforce policy checks before execution.
- Require human approval (HITL) for sensitive operations.
- Record verifiable decision evidence to detect reasoning drift.
Run DashClaw instantly with one command.
```bash
npx dashclaw-demo
```

What happens:
- A local DashClaw demo runtime starts automatically.
- A demo agent attempts a high-risk production deploy.
- DashClaw intercepts the decision and blocks the action before execution.
- Your browser opens directly to the Decision Replay showing the governance trail.
No repo clone. No environment variables. No configuration. Just one command.
In the Decision Replay you'll see:
- 🔴 High risk score (85)
- 🛑 Policy requires approval before deploy
- 🧠 Assumptions recorded by the agent
- 📊 Full decision timeline with outcome
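The demo's block comes down to a rule match. A minimal sketch of how a guard policy could turn the risk-85 deploy into an approval requirement (the rule shape here is hypothetical, not DashClaw's actual policy schema):

```python
# Illustrative policy evaluation: first matching rule wins, default allow.
def evaluate_policy(rules, action):
    for rule in rules:
        if (rule["action_type"] == action["action_type"]
                and action["risk_score"] >= rule["min_risk"]):
            return rule["effect"]
    return "allow"

rules = [{"action_type": "deploy", "min_risk": 70, "effect": "require_approval"}]

evaluate_policy(rules, {"action_type": "deploy", "risk_score": 85})
# → "require_approval"
```

A low-risk deploy, or an action type no rule covers, falls through to `"allow"`.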
Mission Control — Real-time strategic posture, decision timeline, and intervention feed.
Approval Queue — Human-in-the-loop intervention with risk scores and one-click Allow / Deny.
Guard Policies — Declarative rules that govern agent behavior before actions execute.
Drift Detection — Statistical behavioral drift analysis with critical alerts when agents deviate from baselines.
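DashClaw's drift algorithm isn't detailed here; to illustrate the idea, a baseline comparison can be as simple as a z-score of recent risk scores against the agent's history (all numbers below are made up):

```python
# Illustrative drift check, not DashClaw's actual algorithm.
from statistics import mean, stdev

def drift_z(baseline, recent):
    """z-score of the recent mean against the baseline distribution."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return (mean(recent) - mu) / sigma

baseline = [20, 25, 22, 18, 24, 21, 23]  # historical risk scores
recent = [60, 70, 65]                    # the agent suddenly runs riskier actions

z = drift_z(baseline, recent)
alert = abs(z) > 3  # well above 3 standard deviations -> alert is True
```

Real drift detection would look at more than risk scores (action mix, failure rates, assumption patterns), but the shape is the same: a baseline, a recent window, and a threshold.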
Ready to connect your own agent? Use the OpenAI Governed Agent Starter to see DashClaw in a real customer communication workflow.
```bash
# 1. Enter the starter directory
cd examples/openai-governed-agent

# 2. Install and run
npm install
cp .env.example .env
# Add your DASHCLAW_API_KEY to .env
node index.js
```

What it proves:
- Governance Before Execution: `claw.guard()` checks policies before the action.
- Permissioned Autonomy: Pausing for human approval (HITL) on high-risk actions.
- Verifiable Evidence: Intent, assumptions, and outcomes recorded in your dashboard.
Node.js:

```bash
npm install dashclaw
```

Python:

```bash
pip install dashclaw
```

Initialize the client.

Node.js:

```js
import { DashClaw, GuardBlockedError, ApprovalDeniedError } from 'dashclaw';

const claw = new DashClaw({
  baseUrl: process.env.DASHCLAW_BASE_URL, // or your DashClaw instance URL
  apiKey: process.env.DASHCLAW_API_KEY,
  agentId: 'my-agent'
});
```

Python:

```python
from dashclaw.client import DashClaw, GuardBlockedError, ApprovalDeniedError
import os

claw = DashClaw(
    base_url=os.environ["DASHCLAW_BASE_URL"],
    api_key=os.environ.get("DASHCLAW_API_KEY"),
    agent_id="my-agent"
)
```

The minimal governance loop wraps your agent's real-world actions:
```js
// 1. Guard -> "Can I do X?"
const decision = await claw.guard({
  action_type: 'database_query',
  risk_score: 50
});

// 2. Record -> "I am attempting X."
const action = await claw.createAction({
  action_type: 'database_query',
  declared_goal: 'Extract user statistics'
});

// 3. Verify -> "I believe Y is true while doing X."
await claw.recordAssumption({
  action_id: action.action_id,
  assumption: 'The database is read-only for these credentials'
});

try {
  // Execute the real action here...
  // ...
  // 4. Outcome -> "X finished with result Z."
  await claw.updateOutcome(action.action_id, { status: 'completed' });
} catch (error) {
  await claw.updateOutcome(action.action_id, { status: 'failed', error_message: error.message });
}
```

Approve agent actions from the terminal without opening a browser. This is the primary interface for developers using Claude Code, Codex, Gemini CLI, or any terminal-first workflow.
```bash
npm install -g @dashclaw/cli

dashclaw approvals                 # interactive inbox for all pending actions
dashclaw approve <actionId>        # approve a specific action
dashclaw deny <actionId> --reason "Outside change window"
```

When an agent calls `waitForApproval()`, the SDK prints a structured block to stdout showing the action ID, policy name, risk score, declared goal, and a replay link. Approve from any terminal and the agent unblocks instantly via SSE. The browser dashboard reflects the same decision within one second.
Every governed action has a permanent replay URL:
```
<DASHCLAW_BASE_URL>/replay/<actionId>
```
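For example, a trivial helper (hypothetical, not part of the SDK) that builds the replay URL from the format above:

```python
# Build the permanent replay URL for a governed action.
def replay_url(base_url, action_id):
    return f"{base_url.rstrip('/')}/replay/{action_id}"

replay_url("https://dashclaw.example.com/", "act_123")
# → "https://dashclaw.example.com/replay/act_123"
```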
DashClaw includes a standalone Python integration test agent that exercises the major DashClaw SDK methods directly against a running instance.
To run it locally:
```bash
export DASHCLAW_API_KEY="your-api-key"
export DASHCLAW_BASE_URL="http://localhost:3000"

# Run the full SDK test agent
python scripts/test-sdk-agent.py --full
```

See the script comments for more flags and usage.
Govern Claude Code tool calls without any SDK instrumentation. Drop two Python scripts into .claude/hooks/ and every Bash, Edit, Write, and MultiEdit call Claude makes is governed by your DashClaw policies.
```bash
# Copy hooks into your project
cp path/to/DashClaw/hooks/dashclaw_pretool.py .claude/hooks/
cp path/to/DashClaw/hooks/dashclaw_posttool.py .claude/hooks/
```

Merge the hooks block from hooks/settings.json into your .claude/settings.json, then set three environment variables:

```bash
export DASHCLAW_BASE_URL=https://your-dashclaw-instance.com
export DASHCLAW_API_KEY=your_api_key
export DASHCLAW_HOOK_MODE=enforce   # or "observe" to log without blocking
```

The hooks require no pip installs and exit silently when DashClaw is unreachable. Claude Code is never blocked just because your governance layer is down.
See hooks/README.md for the full installation guide and action type mapping.
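To make the enforce/observe and fail-open behavior concrete, here is a rough sketch of the decision logic such a PreToolUse hook implements (names and the tool mapping are illustrative, not the shipped hooks; Claude Code passes the tool call as JSON on stdin and blocks the call when a PreToolUse hook exits with code 2):

```python
import json
import os
import sys

# Illustrative tool -> action-type mapping (assumed values, not the shipped map).
TOOL_MAP = {
    "Bash": "shell_command",
    "Edit": "file_edit",
    "Write": "file_write",
    "MultiEdit": "file_edit",
}

def exit_code(tool_name, mode, guard_allowed):
    """2 blocks the tool call in Claude Code; 0 lets it proceed.
    'observe' mode records decisions but never blocks."""
    if mode != "enforce" or tool_name not in TOOL_MAP:
        return 0
    return 0 if guard_allowed else 2

def check_policy(tool_name):
    """Placeholder for the guard call the shipped hooks make to DashClaw."""
    raise NotImplementedError

def main():
    event = json.load(sys.stdin)           # tool call arrives as JSON on stdin
    tool = event.get("tool_name", "")
    mode = os.environ.get("DASHCLAW_HOOK_MODE", "observe")
    try:
        allowed = check_policy(tool)
    except Exception:
        sys.exit(0)                        # fail open: an unreachable DashClaw never blocks Claude
    sys.exit(exit_code(tool, mode, allowed))
```

The pure `exit_code` function is where policy meets the hook protocol; everything else is plumbing around it.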
The fastest path to self-host DashClaw is via Vercel + Neon.
- Fork this repo.
- Deploy to Vercel and connect a free Neon Postgres database.
- Run the interactive setup to configure secrets and run migrations:
  ```bash
  node scripts/setup.mjs
  ```
- Your instance is live. Grab your API key from the dashboard and point your first agent at it.
For the complete API surface, check out the SDK Reference.

