DEPRECATED — This repository has been consolidated into mdo-nexus-ooda. No further updates here.
Orchestrate multi-model AI agent teams. Observe every decision in real time.
Quick Start • Architecture • API • Releases
"Humans steer. Agents execute."
Nexus gives you a single control surface for multi-model AI agent teams. Define a team of specialist models, assign each a role, then launch tasks and watch execution unfold in real time through WebSocket-driven dashboards.
One endpoint provisions a battle-tested five-agent team. One command deploys the entire stack.
| **Compose** | **Execute** | **Observe** | **Analyze** |
|---|---|---|---|
| Build teams with role-specific models | Parallel & sequential pipelines | Live WebSocket execution feed | Tokens, cost, latency per agent |
```mermaid
graph TB
    subgraph CLIENT ["CLIENT"]
        direction LR
        D["Dashboard"]
        TB["TeamBuilder"]
        EM["ExecutionMonitor"]
        RV["ResultsViewer"]
    end
    subgraph API ["FASTAPI :8800"]
        direction LR
        REST["/api/*"]
        WS["/ws"]
        ORCH["Orchestrator"]
    end
    subgraph DATA ["PERSISTENCE"]
        direction LR
        DB[("SQLite\nWAL mode")]
        LLM["LiteLLM\n:4000"]
    end
    CLIENT -- "REST + WebSocket" --> API
    ORCH --> DB
    ORCH --> LLM
    style CLIENT fill:#1e1e2e,stroke:#cba6f7,color:#cdd6f4
    style API fill:#1e1e2e,stroke:#89b4fa,color:#cdd6f4
    style DATA fill:#1e1e2e,stroke:#a6e3a1,color:#cdd6f4
```
```mermaid
erDiagram
    TEAM ||--o{ AGENT : contains
    TEAM ||--o{ TASK : receives
    TASK ||--o{ STEP : "broken into"
    TASK ||--o{ EXECUTION : triggers
    EXECUTION ||--o{ STEP_RESULT : produces
    TEAM {
        string id PK
        string name
        string strategy
    }
    AGENT {
        string id PK
        string role
        string model
    }
    TASK {
        string id PK
        string prompt
        string mode
    }
    EXECUTION {
        string id PK
        string status
        float total_cost
    }
```
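The entities above can be sketched as Python dataclasses. Field names come from the ER diagram; the relationship fields and example values are illustrative assumptions, not the project's actual models.

```python
# Sketch of the data model implied by the ER diagram above.
# Only id/name/strategy, id/role/model, id/prompt/mode, and
# id/status/total_cost are documented; everything else is assumed.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Agent:
    id: str
    role: str      # e.g. "Orchestrator", "Actor"
    model: str     # model identifier routed through LiteLLM


@dataclass
class Team:
    id: str
    name: str
    strategy: str
    agents: List[Agent] = field(default_factory=list)  # TEAM ||--o{ AGENT


@dataclass
class Task:
    id: str
    prompt: str
    mode: str      # execution mode, e.g. "parallel" or "sequential" (assumed)


@dataclass
class Execution:
    id: str
    status: str
    total_cost: float  # aggregated spend across agents
```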
A single `POST /api/teams/preset/harness` provisions the reference team:
| Role | Model | Responsibility |
|---|---|---|
| Orchestrator | `claude-opus-4-6` | Decomposition, delegation, judgment |
| Backend | `gpt-5.3-codex` | Logic, review, refactoring |
| Actor | `claude-sonnet-4-6` | Primary code generation |
| Security | `qwen3-coder` | Vulnerability analysis |
| Designer | `gemini-3.1-pro` | UI evaluation, visual judgment |
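Provisioning the preset from Python is one call — a standard-library sketch; the endpoint path is from this README, but the JSON response shape (a team object with its agents) is an assumption.

```python
# Provision the reference five-agent team via the preset endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:8800"
PRESET_ENDPOINT = f"{BASE_URL}/api/teams/preset/harness"


def provision_harness() -> dict:
    """POST (empty body) to the preset endpoint; return the created team."""
    req = urllib.request.Request(PRESET_ENDPOINT, data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumed: JSON team object with agents
```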
```bash
docker compose up -d
```

Frontend on :3000 · Backend on :8800
```bash
# backend
cd backend && pip install -e ".[dev]"
uvicorn src.main:app --port 8800 --reload

# frontend
cd frontend && npm install && npm run dev
```

Open http://localhost:5173 — API proxied automatically.
Optional — connect a LiteLLM proxy at `localhost:4000` for live AI execution.
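You can confirm both the backend and its LiteLLM connection from Python via `/api/health` (which the API table below lists as reporting liveness plus LiteLLM status). The response field name `litellm` and its `"connected"` value are assumptions for illustration.

```python
# Quick liveness probe against the backend's health endpoint.
import json
import urllib.request


def health(base: str = "http://localhost:8800") -> dict:
    """Fetch /api/health and decode the JSON body."""
    with urllib.request.urlopen(f"{base}/api/health") as resp:
        return json.load(resp)


def litellm_ok(payload: dict) -> bool:
    """True if the (assumed) 'litellm' field reports a live proxy."""
    return payload.get("litellm") == "connected"
```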
| Method | Endpoint | Purpose |
|---|---|---|
| GET | `/api/health` | Liveness + LiteLLM status |
| GET | `/api/dashboard` | Aggregate statistics |
| GET | `/api/models` | Available models via LiteLLM |
| POST | `/api/teams` | Create team |
| POST | `/api/teams/preset/harness` | Provision preset team |
| GET | `/api/teams` | List teams |
| POST | `/api/tasks` | Create task with steps |
| POST | `/api/tasks/{id}/execute` | Execute task |
| GET | `/api/executions` | List executions |
| WS | `/ws` | Real-time execution stream |
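Chaining the task endpoints gives the basic flow: create a task, then trigger its execution. The paths come from the table above; the payload field names (`prompt`, `mode`) and the response carrying an `id` are assumptions inferred from the data model, not a documented schema.

```python
# End-to-end sketch: POST /api/tasks, then POST /api/tasks/{id}/execute.
import json
import urllib.request
from typing import Optional

BASE = "http://localhost:8800"


def execute_path(task_id: str) -> str:
    """Build the path for POST /api/tasks/{id}/execute."""
    return f"/api/tasks/{task_id}/execute"


def post(path: str, payload: Optional[dict] = None) -> dict:
    """POST JSON to the API and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload or {}).encode(),
        method="POST",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def run_task(prompt: str, mode: str = "sequential") -> dict:
    """Create a task, execute it, and return the execution record."""
    task = post("/api/tasks", {"prompt": prompt, "mode": mode})  # assumed schema
    return post(execute_path(task["id"]))
```

For live progress, subscribe to `/ws` with any WebSocket client while the execution runs.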
```bash
cd backend && pytest tests/ -v   # 17 tests · ~2.6s
```