Canonical reference for the DashClaw SDK (v2.1.5). The Node.js and Python clients have feature parity across all core governance features.
```bash
npm install dashclaw
```

```js
import { DashClaw } from 'dashclaw';

const claw = new DashClaw({
  baseUrl: 'https://dashclaw.io',
  apiKey: process.env.DASHCLAW_API_KEY,
  agentId: 'my-agent'
});

// 1. Ask permission
const result = await claw.guard({
  action_type: 'deploy',
  risk_score: 85,
  declared_goal: 'Update auth service to v2.1.1'
});
if (result.decision === 'block') {
  throw new Error(`Blocked: ${result.reasons.join(', ')}`);
}

// 2. Log intent
const { action_id } = await claw.createAction({
  action_type: 'deploy',
  declared_goal: 'Update auth service to v2.1.1'
});

try {
  // 3. Log evidence
  await claw.recordAssumption({
    action_id,
    assumption: 'Tests passed'
  });

  // ... deploy ...

  // 4. Record outcome
  await claw.updateOutcome(action_id, { status: 'completed' });
} catch (err) {
  await claw.updateOutcome(action_id, { status: 'failed', error_message: err.message });
  throw err; // re-throw so the failure is surfaced after it is recorded
}
```

```js
const claw = new DashClaw({ baseUrl, apiKey, agentId });
```

| Parameter | Type | Required | Description |
|---|---|---|---|
| baseUrl / base_url | string | Yes | Dashboard URL |
| apiKey / api_key | string | Yes | API Key |
| agentId / agent_id | string | Yes | Unique Agent ID |
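The quickstart's four calls (guard, createAction, recordAssumption, updateOutcome) can be folded into one helper so every risky operation follows the same lifecycle. A minimal sketch: `runGoverned` and its parameter names are this example's own, not part of the SDK; `claw` is any client exposing those four documented methods.

```javascript
// Sketch: wrap the guard -> createAction -> recordAssumption -> updateOutcome
// lifecycle into one reusable helper. `doWork` performs the real operation.
async function runGoverned(claw, { action_type, declared_goal, risk_score, assumption }, doWork) {
  // 1. Ask permission before doing anything
  const verdict = await claw.guard({ action_type, risk_score, declared_goal });
  if (verdict.decision === 'block') {
    throw new Error(`Blocked: ${verdict.reasons.join(', ')}`);
  }

  // 2. Log intent
  const { action_id } = await claw.createAction({ action_type, declared_goal });

  // 3. Log evidence (optional)
  if (assumption) await claw.recordAssumption({ action_id, assumption });

  // 4. Do the work and record the outcome either way
  try {
    const result = await doWork();
    await claw.updateOutcome(action_id, { status: 'completed' });
    return result;
  } catch (err) {
    await claw.updateOutcome(action_id, { status: 'failed', error_message: err.message });
    throw err;
  }
}
```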
Evaluate guard policies for a proposed action. Call this before risky operations.
| Parameter | Type | Required | Description |
|---|---|---|---|
| action_type | string | Yes | Proposed action type |
| risk_score | number | No | Risk score from 0 to 100 (higher is riskier) |
Returns: `Promise<{ decision: string, reasons: string[], risk_score: number, agent_risk_score: number | null }>`

```js
const result = await claw.guard({ action_type: 'deploy', risk_score: 85 });
```

Create a new action record.

Returns: `Promise<{ action_id: string }>`

```js
const { action_id } = await claw.createAction({ action_type: 'deploy' });
```

Poll for human approval.

```js
await claw.waitForApproval(action_id);
```

Log final results.

```js
await claw.updateOutcome(action_id, { status: 'completed' });
```

Track agent beliefs.

```js
await claw.recordAssumption({ action_id, assumption: '...' });
```

Get current risk signals across all agents.

Returns: `Promise<{ signals: Object[] }>`

```js
const { signals } = await claw.getSignals();
```

Map your entire fleet as a neural web, highlighting high-risk nodes and message flows.
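guard() and waitForApproval() can be combined so that a high-risk action pauses for a human instead of failing outright. A hedged sketch: the `'block'` decision string comes from the guard() reference above, but treating a block as "needs approval" is this example's policy choice, not documented SDK behaviour.

```javascript
// Sketch: gate an action; when guard() blocks it, file the action and wait
// for a human to approve it. Returns the action_id when approval was needed,
// or null when the action could proceed immediately.
async function guardOrWait(claw, params) {
  const verdict = await claw.guard(params);
  if (verdict.decision !== 'block') return null; // proceed immediately
  const { action_id } = await claw.createAction({
    action_type: params.action_type,
    declared_goal: params.declared_goal,
  });
  await claw.waitForApproval(action_id); // resolves once a human approves
  return action_id;
}
```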
Report agent presence and health to the control plane.
| Parameter | Type | Required | Description |
|---|---|---|---|
| status | string | No | Agent status (online, busy, offline) |
| metadata | object | No | Additional agent state |
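In practice agents send heartbeats on a timer rather than as one-off calls. A sketch; the 30-second default and the `uptime_s` metadata field are arbitrary choices for this example, not SDK conventions.

```javascript
// Sketch: report liveness on a fixed interval and return a stop function.
function startHeartbeat(claw, intervalMs = 30_000) {
  const timer = setInterval(() => {
    claw.heartbeat('online', { uptime_s: Math.round(process.uptime()) })
      .catch(() => { /* a failed heartbeat should not crash the agent */ });
  }, intervalMs);
  timer.unref?.(); // don't keep the process alive just for heartbeats
  return () => clearInterval(timer);
}
```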
```js
await claw.heartbeat('online', { task: 'monitoring' });
```

Report active provider connections and their status.
| Parameter | Type | Required | Description |
|---|---|---|---|
| connections | Array<Object> | Yes | List of provider connection objects |
```js
await claw.reportConnections([
  { provider: 'openai', status: 'active', plan_name: 'enterprise' }
]);
```

Register an unresolved dependency for a decision. Open loops track work that must be completed before the decision is fully resolved.
| Parameter | Type | Required | Description |
|---|---|---|---|
| action_id | string | Yes | Associated action |
| loop_type | string | Yes | The category of the loop |
| description | string | Yes | What needs to be resolved |
```js
await claw.registerOpenLoop(action_id, 'validation', 'Waiting for PR review');
```

Resolve a pending loop.

```js
await claw.resolveOpenLoop(loop_id, 'completed', 'Approved');
```
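Registering and resolving can be paired so a loop cannot be forgotten. A sketch assuming registerOpenLoop resolves to an object carrying the new loop's id as `loop_id`; the reference above does not document the return shape, so treat that field name as a guess.

```javascript
// Sketch: register an open loop and hand back a resolver bound to its id.
async function openLoop(claw, action_id, loop_type, description) {
  const { loop_id } = await claw.registerOpenLoop(action_id, loop_type, description);
  return {
    loop_id,
    // close the loop later with a status and an optional note
    resolve: (status = 'completed', note = '') => claw.resolveOpenLoop(loop_id, status, note),
  };
}
```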
Record what the agent believed to be true when making a decision.
```js
await claw.recordAssumption({ action_id, assumption: 'User is authenticated' });
```

Compute learning velocity (rate of score improvement) for agents.

Returns: `Promise<{ velocity: Array<Object> }>`

```js
const { velocity } = await claw.getLearningVelocity();
```

Compute learning curves per action type to measure efficiency gains.

```js
const curves = await claw.getLearningCurves();
```
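A common use of velocity data is ranking agents by improvement rate. A sketch: the reference above only says each entry is an Object, so the `agent_id` and `velocity` field names used here are hypothetical placeholders.

```javascript
// Sketch: surface the n fastest-improving agents from getLearningVelocity().
// Field names on the velocity entries are assumed, not documented.
async function topImprovers(claw, n = 5) {
  const { velocity } = await claw.getLearningVelocity();
  return [...velocity]
    .sort((a, b) => (b.velocity ?? 0) - (a.velocity ?? 0)) // fastest first
    .slice(0, n)
    .map((entry) => entry.agent_id);
}
```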
Fetch a rendered prompt template from DashClaw.

```js
const { rendered } = await claw.renderPrompt({
  template_id: 'marketing',
  variables: { company: 'Apple' }
});
```

Create a reusable scorer definition for automated evaluation.
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Scorer name |
| scorer_type | string | Yes | Type (llm_judge, regex, range) |
| config | object | No | Scorer configuration |
```js
await claw.createScorer('toxicity', 'regex', { pattern: 'bad-word' });
```

Define weighted quality scoring profiles across multiple scorers.
```js
await claw.createScoringProfile({
  name: 'prod-quality',
  dimensions: [{ scorer: 'toxicity', weight: 0.5 }]
});
```

Map active policies to a compliance framework's controls.

```js
await claw.mapCompliance('SOC2');
```

Generate a compliance proof report from active policies.

```js
const report = await claw.getProofReport('json');
```

Query the immutable audit trail of all workspace changes and administrative events.

```js
const logs = await claw.getActivityLogs({ limit: 10 });
```

Register an HMAC-signed webhook for real-time delivery of governance events.

```js
await claw.createWebhook('https://api.myapp.com/hooks', ['action.blocked']);
```

Errors are returned as JSON, for example:

```js
{ message: "Validation failed", status: 400 }
```