A light-weight and straightforward system for spec-driven development with Claude Code
Getting Started • Why • How It Works • Documentation • Installation
```shell
curl -fsSL https://raw.githubusercontent.com/marconae/speq-skill/main/install.sh | bash
```

> [!NOTE]
> The installer builds `speq` from source using the Rust toolchain (installed automatically if missing). There is no binary distribution. See Installation for details.
Then run `claude` and type `/speq:mission` to start.
What does the installer do?
- Downloads the latest release source from GitHub
- Installs the Rust toolchain if missing (via rustup)
- Builds the `speq` CLI from source
- Installs the CLI to `~/.local/bin/speq`
- Installs plugin files to `~/.speq-skill/`
- Registers the plugin with Claude Code
To uninstall, see Installation — Uninstall.
I want to leverage Claude Code as an effective tool to write software.
There are many other spec-driven development tools out there: OpenSpec, BMAD, SpecKit...
...but I was missing the following:
- A system that is not tied to one language or framework (e.g., Python or TypeScript)
- A straightforward, repeatable workflow (`plan → implement → record`)
- A permanent and growing spec library
- A system that keeps the specs small to avoid context cluttering
- A system that keeps asking me instead of making assumptions
- Semantic anchors that ground AI behavior in established methodologies
So I built speq-skill.
It combines skills with a simple CLI called `speq` that adds a semantic search layer on top of the permanent spec library. The search lets the coding agent find the right feature or scenario during planning as well as during implementation, avoiding reading unnecessary specs into the context window.
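To illustrate the idea of a semantic search layer, here is a minimal sketch of embedding-based retrieval over spec snippets. This is not the actual `speq` implementation: the toy bag-of-words "embedding", the vocabulary, and the example spec paths are all made up for illustration. The real CLI uses a proper embedding model.

```python
import math

def embed(text):
    # Toy "embedding": a word-count vector over a tiny fixed vocabulary.
    # A real system would use a trained embedding model instead.
    vocab = ["login", "user", "payment", "refund", "password", "invoice"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query, specs):
    # Rank specs by similarity to the query; return paths, best first.
    q = embed(query)
    ranked = sorted(specs, key=lambda s: cosine(q, embed(s["text"])), reverse=True)
    return [s["path"] for s in ranked]

specs = [
    {"path": "specs/auth/login/spec.md", "text": "user login with password"},
    {"path": "specs/billing/refund/spec.md", "text": "refund a payment invoice"},
]
print(search("reset user password", specs)[0])  # → specs/auth/login/spec.md
```

The agent only needs to read the top-ranked spec into its context, which is the point: retrieval narrows the context window instead of loading the whole library.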
New to spec-driven development? Read "Spec-driven development: an introduction" and "Writing specs for AI coding agents" on my blog.
Each skill is grounded in semantic anchors — named references to established methodologies (like London School TDD, BLUF, ADR) that steer AI behavior toward well-documented practices.
Vibe Coding does not scale. speq-skill adds the missing workflow and guardrails.
If you want to describe what you want and have a coding agent generate the code for you, then you should give speq-skill a try!
It introduces a lightweight workflow for spec-driven development. It adds a CLI to enable the coding agent to search the permanent spec library.
```
/speq:mission → specs/mission.md   (once per project)
                    │
    ┌───────────────┼───────────────┐
    ▼               ▼               ▼
/speq:plan → /speq:implement → /speq:record   (repeat)
```
- Mission — Do it once. The coding agent explores your codebase (or interviews you for a greenfield project) and generates `specs/mission.md`.
- Plan — Describe what you want. The coding agent searches existing specs, asks clarifying questions, and creates a plan with spec deltas.
- Implement — The coding agent implements the plan, guided by guardrails for code quality, testing and more.
- Record — The coding agent merges implemented spec deltas into the permanent spec library.
Specs live in `specs/<domain>/<feature>/spec.md`. Plans stage in `specs/_plans/<plan-name>/`. The separation keeps your spec library clean while work is in progress.
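Based on those paths, a project's spec library might look like this (the domain, feature, and plan names are hypothetical examples):

```
specs/
├── mission.md            # generated by /speq:mission
├── auth/                 # <domain>  (hypothetical)
│   └── login/            # <feature> (hypothetical)
│       └── spec.md
└── _plans/
    └── add-login/        # <plan-name> (hypothetical), work in progress
```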
| Guide | Description |
|---|---|
| Installation | Setup CLI and plugin |
| Workflow | One-time mission setup, then Plan → Implement → Record cycle |
| CLI Reference | All CLI commands |
| MCP Servers | Serena and Context7 |
| Semantic Anchors | Named methodologies grounding each skill |
speq-skill is a plugin for Claude Code and other compatible AI coding agents. This tool provides workflow structure and spec management only—the AI / coding agent (such as Claude Code) generates all code, specs, or other artifacts.
This plugin uses Serena and Context7 MCP servers. The installer sets them up as a convenience — they are standard open-source servers installed from their respective repositories. Their behavior, limitations, and conditions are governed by their own documentation. Context7's MCP server connects to a cloud service with a free tier — see Context7.
The speq CLI downloads the snowflake-arctic-embed-xs embeddings model (~23MB) on first run for semantic search.
Built with Rust 🦀 and made with ❤️ by marconae.