Composable agent runtime with enforced isolation boundaries
Design principle: Skills are declared capabilities.
Capabilities only exist when bound to an isolated execution boundary.
VoidBox = Agent(Skills) + Isolation
Architecture · Quick Start · Observability
Local-first. Cloud-ready. Runs on any Linux host with /dev/kvm.
Status: v0 (early release). Production-ready architecture; APIs are still stabilizing.
- Isolated execution — Each stage runs inside its own micro-VM boundary (not shared-process containers).
- Policy-enforced runtime — Command allowlists, resource limits, seccomp-BPF, and controlled network egress.
- Skill-native model — MCP servers, SKILL files, and CLI tools mounted as declared capabilities.
- Composable pipelines — Sequential `.pipe()`, parallel `.fan_out()`, with explicit stage-level failure domains (see the sketch after this list).
- Claude Code native runtime — Each stage runs `claude-code`, backed by Claude (default) or Ollama via Claude-compatible provider mode.
- Observability native — OTLP traces, metrics, structured logs, and stage-level telemetry emitted by design.
- No root required — Usermode SLIRP networking via smoltcp (no TAP devices).
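The composition methods named above are not shown elsewhere in this README, so here is a minimal sketch of a two-stage sequential pipeline. It assumes `.pipe()` chains built boxes and that the resulting pipeline exposes the same `run` call as a single box; treat those details as assumptions rather than the verified API (the builder calls themselves are the ones from the quick start below).

```rust
// Minimal composition sketch. `.pipe()` is the method name from the feature
// list above; its exact signature and the pipeline-level `run` call are
// assumptions, not verified API.
use void_box::agent_box::VoidBox;
use void_box::skill::Skill;

// Stage 1: fetch data inside its own micro-VM (network enabled).
let fetch = VoidBox::new("fetch_stories")
    .skill(Skill::file("skills/hackernews-api.md"))
    .network(true)
    .prompt("Fetch the current top HN stories as JSON")
    .build()?;

// Stage 2: reason over the output in a separate, isolated micro-VM.
let analyze = VoidBox::new("analyze_trends")
    .skill(Skill::agent("claude-code"))
    .prompt("Summarize AI engineering trends in the provided stories")
    .build()?;

// Sequential composition: a failure in one stage stays inside that stage's boundary.
let result = fetch.pipe(analyze).run(None).await?;
```

A parallel `.fan_out()` composition would follow the same pattern, spreading one input across several isolated stages.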
Isolation is the primitive. Pipelines are compositions of bounded execution environments.
Containers share a host kernel.
For general application isolation, this is often sufficient. For AI agents executing tools, code, and external integrations, it creates shared failure domains.
In a shared-process model:
- Tool execution and agent runtime share the same kernel.
- Escape surfaces are reduced, but not eliminated.
- Resource isolation depends on cgroups and cooperative enforcement.
VoidBox binds each agent stage to its own micro-VM boundary.
Isolation is enforced by hardware virtualization — not advisory process controls.
```bash
cargo add void-box
```

```rust
use void_box::agent_box::VoidBox;
use void_box::skill::Skill;
use void_box::llm::LlmProvider;

// Skills = declared capabilities
let hn_api = Skill::file("skills/hackernews-api.md")
    .description("HN API via curl + jq");

let reasoning = Skill::agent("claude-code")
    .description("Autonomous reasoning and code execution");

// VoidBox = Agent(Skills) + Isolation
let researcher = VoidBox::new("hn_researcher")
    .skill(hn_api)
    .skill(reasoning)
    .llm(LlmProvider::ollama("qwen3-coder")) // claude-code runtime using Ollama backend
    .memory_mb(1024)
    .network(true)
    .prompt("Analyze top HN stories for AI engineering trends")
    .build()?;
```

```yaml
# hackernews_agent.yaml
api_version: v1
kind: agent
name: hn_researcher

sandbox:
  mode: auto
  memory_mb: 1024
  network: true

llm:
  provider: ollama
  model: qwen3-coder

agent:
  prompt: "Analyze top HN stories for AI engineering trends"
  skills:
    - "file:skills/hackernews-api.md"
    - "agent:claude-code"
  timeout_secs: 600
```

```rust
// Rust API
let result = researcher.run(None).await?;
println!("{}", result.claude_result.result_text);
```

```bash
# Or via CLI with a YAML spec
voidbox run --file hackernews_agent.yaml
```

```text
┌────────────────────────────────────────────────┐
│ Host                                           │
│ VoidBox Engine / Pipeline Orchestrator         │
│                                                │
│  ┌─────────────────────────────────────┐       │
│  │ VMM (KVM)                           │       │
│  │   vsock ←→ guest-agent (PID 1)      │       │
│  │   SLIRP ←→ eth0 (10.0.2.15)         │       │
│  └─────────────────────────────────────┘       │
│                                                │
│  Seccomp-BPF  │  OTLP export                   │
└───────────────┼────────────────────────────────┘
   Hardware     │     Isolation
════════════════╪════════════════════════════════
                │
┌───────────────▼──────────────────────────────────────┐
│ Guest VM (Linux)                                      │
│   guest-agent: auth, allowlist, rlimits               │
│   claude-code runtime (Claude API or Ollama backend)  │
│   skills provisioned into isolated runtime            │
└───────────────────────────────────────────────────────┘
```
See docs/architecture.md for the full component diagram, wire protocol, and security model.
Every pipeline run is fully instrumented out of the box. Each VM stage emits spans and metrics via OTLP, giving you end-to-end visibility across isolated execution boundaries — from pipeline orchestration down to individual tool calls inside each micro-VM.
- OTLP traces — Per-box spans, tool call events, pipeline-level trace
- Metrics — Token counts, cost, duration per stage
- Structured logs — `[vm:NAME]`-prefixed, trace-correlated
- Guest telemetry — procfs metrics (CPU, memory) exported to host via vsock
Enable with `--features opentelemetry` and set `VOIDBOX_OTLP_ENDPOINT`.
See the playground for a ready-to-run stack with Grafana, Tempo, and Prometheus.
```bash
cargo run --example quick_demo
cargo run --example trading_pipeline
cargo run --example parallel_pipeline
```

```bash
# Build guest initramfs (includes claude-code binary, busybox, CA certs)
scripts/build_claude_rootfs.sh
# Run with Claude API
ANTHROPIC_API_KEY=sk-ant-xxx \
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
cargo run --example trading_pipeline
# Or with Ollama
OLLAMA_MODEL=qwen3-coder \
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
cargo run --example trading_pipeline
```

```bash
OLLAMA_MODEL=phi4-mini \
OLLAMA_MODEL_QUANT=qwen3-coder \
OLLAMA_MODEL_SENTIMENT=phi4-mini \
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
cargo run --example parallel_pipeline
```

```bash
cargo test --lib                  # Unit tests
cargo test --test skill_pipeline  # Integration tests (mock)
cargo test --test integration     # Integration tests
# E2E (requires KVM + test initramfs)
scripts/build_test_image.sh
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=/tmp/void-box-test-rootfs.cpio.gz \
cargo test --test e2e_skill_pipeline -- --ignored --test-threads=1VoidBox is evolving toward a durable, capability-bound execution platform.
- Session persistence — Durable run/session state with pluggable backends (filesystem, SQLite, Valkey).
- Terminal-native interactive experience — Panel-based, live-streaming interface powered by the event API.
- Persistent block devices (virtio-blk) — Stateful workloads across VM restarts.
- aarch64 support — Native ARM64 builds with release pipeline cross-compilation.
- Codex-style backend support — Optional execution backend for code-first workflows.
- Language bindings — Python and Node.js SDKs for daemon-level integration.
Apache-2.0 · The Void Platform

