Every time you use an AI coding agent, it starts from zero. You spend an hour debugging some obscure error, the agent figures it out, session ends. Next time you hit the same issue? Another hour.
This skill fixes that. When a Droid discovers something non-obvious (a debugging technique, a workaround, a project-specific pattern), it captures that knowledge as a new skill. The next time a similar problem comes up, that skill gets loaded automatically.
User-level (recommended):

```bash
git clone https://github.com/blader/Claudeception.git ~/.factory/skills/claudeception
```

Project-level:

```bash
git clone https://github.com/blader/Claudeception.git .factory/skills/claudeception
```

After cloning, restart the Droid CLI so it picks up the new skill. The skill will automatically activate when Droids encounter tasks involving debugging, workarounds, or non-obvious solutions.
The skill activates automatically when a Droid has:
- Just completed debugging and discovered a non-obvious solution
- Found a workaround through investigation or trial-and-error
- Resolved an error where the root cause wasn't immediately apparent
- Learned project-specific patterns or configurations through investigation
- Completed any task where the solution required meaningful discovery
Request skill extraction directly:

> Save what we just learned as a skill

Or ask to review what was learned:

> What did we learn from this session?
Not every task produces a skill. The skill only extracts knowledge that required actual discovery (not just reading docs), will help with future tasks, has clear trigger conditions, and has been verified to work.
The idea comes from academic work on skill libraries for AI agents.
Voyager (Wang et al., 2023) showed that game-playing agents can build up libraries of reusable skills over time, and that this helps them avoid re-learning things they already figured out. CASCADE (2024) introduced "meta-skills" (skills for acquiring skills), which is what this is. SEAgent (2025) showed agents can learn new software environments through trial and error, which inspired the retrospective feature. Reflexion (Shinn et al., 2023) showed that agents improve when they reflect on their own failures and carry those reflections into later attempts.
The common thread: agents that persist what they learn do better than agents that start from scratch every session.
Droids have a native skills system. At startup, they load skill names and descriptions (about 100 tokens each). When you're working, they match your current context against those descriptions and pull in relevant skills.
But this retrieval system can be written to, not just read from. So when this skill notices extractable knowledge, it writes a new skill with a description optimized for future retrieval.
The description matters a lot. "Helps with database problems" won't match anything useful. "Fix for PrismaClientKnownRequestError in serverless" will match when someone hits that error.
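To make that concrete, here is a minimal Python sketch of how description-based retrieval could work. This is not Factory's implementation: the skills directory layout, the SKILL.md filename, and the keyword-overlap scoring are all assumptions for illustration only.

```python
# Hypothetical sketch of description-based skill retrieval.
# Assumptions (not Factory's actual internals): skills live in
# ~/.factory/skills/<name>/SKILL.md and matching is keyword overlap.
from pathlib import Path

import yaml  # pip install pyyaml

SKILLS_DIR = Path.home() / ".factory" / "skills"  # assumed location


def load_skill_index(skills_dir: Path = SKILLS_DIR) -> dict[str, str]:
    """Read each skill's name and description from its YAML frontmatter."""
    index: dict[str, str] = {}
    for skill_file in skills_dir.glob("*/SKILL.md"):
        text = skill_file.read_text(encoding="utf-8")
        if text.startswith("---"):
            frontmatter = text.split("---", 2)[1]
            meta = yaml.safe_load(frontmatter)
            index[meta["name"]] = meta.get("description", "")
    return index


def match_skills(context: str, index: dict[str, str]) -> list[str]:
    """Rank skills by naive keyword overlap with the current context."""
    context_words = set(context.lower().split())
    scored = []
    for name, description in index.items():
        overlap = len(context_words & set(description.lower().split()))
        if overlap:
            scored.append((overlap, name))
    return [name for _, name in sorted(scored, reverse=True)]


if __name__ == "__main__":
    index = load_skill_index()
    error = "PrismaClientKnownRequestError: Too many database connections"
    print(match_skills(error, index))
```

Even with scoring this naive, a description that contains the literal error name matches the failing context, while "Helps with database problems" matches nothing.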
More on the skills architecture in the Factory Skills Documentation.
Extracted skills are markdown files with YAML frontmatter:
```markdown
---
name: prisma-connection-pool-exhaustion
description: |
  Fix for PrismaClientKnownRequestError: Too many database connections
  in serverless environments (Vercel, AWS Lambda). Use when connection
  count errors appear after ~5 concurrent requests.
allowed-tools:
  - Read
  - Edit
  - Create
  - Execute
---

# Prisma Connection Pool Exhaustion

## Problem
[What this skill solves]

## Context / Trigger Conditions
[Exact error messages, symptoms, scenarios]

## Solution
[Step-by-step fix]

## Verification
[How to confirm it worked]
```

See resources/skill-template.md for the full template.
The skill is picky about what it extracts. If something is just a documentation lookup, or only useful for this one case, or hasn't actually been tested, it won't create a skill. Would this actually help someone who hits this problem in six months? If not, no skill.
See examples/ for sample skills:
- nextjs-server-side-error-debugging/: errors that don't show in the browser console
- prisma-connection-pool-exhaustion/: the "too many connections" serverless problem
- typescript-circular-dependency/: detecting and fixing import cycles
Contributions welcome. Fork, make changes, submit a PR.
MIT