Install:
```sh
npx skills add diskd-ai/code-review
```
Structured code review skill that produces high-signal, actionable findings with severity labels, exact file+line citations, and a final verdict. Supports both PR/diff reviews and full codebase reviews.
This skill provides a repeatable review workflow covering:
- Design and architecture assessment
- Functionality and correctness verification
- Complexity and over-engineering detection
- Test coverage and quality evaluation
- Naming, comments, style, and documentation checks
- Security vulnerability scanning
- Performance and reliability analysis
Triggers:
- Asked to review code, a PR, a diff, a CL, or a codebase
- Asked to audit code quality, perform a security review, or check architecture
- Mentions of "code review," "review this," or "give feedback on this code"
Use cases:
- Review a pull request or diff before merge
- Audit an entire codebase or specific module for quality
- Focus on a specific concern (security, performance, design)
- Produce a standardized review report with prioritized findings
PR/diff review workflow:
- Establish what to review (checkout, diff access)
- Scope the change (`git diff --name-only`, `diff_changed_ranges.py`)
- Broad assessment -- does the change make sense as a whole?
- Critical components -- review the largest logical changes first
- Systematic review -- remaining files against the checklist
- Verdict -- approve, request changes, or comment only
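The scoping step above hinges on turning a raw diff into per-file changed line ranges, which is what `diff_changed_ranges.py` provides. As an illustration of the idea (a minimal sketch, not the script's actual implementation), a unified-diff hunk header like `@@ -10,2 +12,3 @@` encodes where the change lands in the new file:

```python
import re
from collections import defaultdict

# Capture the new-file start line and line count from a hunk header.
HUNK_RE = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")

def changed_ranges(diff_text):
    """Map each changed file to (start, end) line ranges in the new version."""
    ranges = defaultdict(list)
    current = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current = line[6:]  # path of the new version of the file
        elif current and (m := HUNK_RE.match(line)):
            start = int(m.group(1))
            count = int(m.group(2) or "1")  # count omitted means a single line
            if count:  # count == 0 is a pure deletion; nothing to cite
                ranges[current].append((start, start + count - 1))
    return dict(ranges)

sample = """\
diff --git a/app.py b/app.py
--- a/app.py
+++ b/app.py
@@ -10,2 +12,3 @@ def handler():
 unchanged
+added line
 unchanged
"""
print(changed_ranges(sample))  # {'app.py': [(12, 14)]}
```

Feeding this the output of `git diff -U0 BASE...HEAD` yields exactly the ranges a reviewer needs to cite.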
Codebase review workflow:
- Establish scope -- entire repo, specific module, or specific concern
- Understand the architecture -- structure, layers, patterns
- Systematic review -- module by module against the checklist
- Summary -- overall code health with prioritized findings
Review checklist:

| Area | Key Questions |
|---|---|
| Design | Right architecture? Proper separation of concerns? |
| Functionality | Does it do what's intended? Edge cases handled? |
| Complexity | Simplest correct solution? Over-engineered? |
| Tests | Present, correct, will fail when code breaks? |
| Naming | Descriptive, follows conventions? |
| Comments | Explain why, not what? |
| Style | Follows style guide? Consistent? |
| Documentation | Updated if behavior changed? |
| Security | Input validation, auth, injection, secrets? |
| Performance | N+1, unbounded loops, resource leaks, races? |
Severity labels:

| Label | Meaning |
|---|---|
| Blocker | Must be fixed before approval |
| High | Strongly recommended to fix |
| Medium | Worth addressing |
| Nit | Minor style or preference, not blocking |
| Optional | Suggestion the author can take or leave |
| FYI | Educational context, no action required |
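In a review, a labeled finding might read like this (the file, lines, and issue are invented purely for illustration):

```markdown
- **High** (Security): `api/auth.py#L42-L47` -- token compared with `==`;
  use a constant-time comparison such as `hmac.compare_digest`.
- **Nit** (Style): `api/auth.py#L12` -- prefer `snake_case` for local variables.
```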
Repository layout:

```
code-review/
  SKILL.md                  # Entry point (workflow + checklist)
  README.md                 # This file (overview)
  references/
    review-standard.md      # When to approve vs request changes
    what-to-look-for.md     # Full review checklist with details
    review-comments.md      # How to write effective comments
  scripts/
    diff_changed_ranges.py  # Parse git diff into changed line ranges
    print_lines.py          # Print exact line ranges for citations
  assets/
    review_template.md      # Standardized review report template
```
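`print_lines.py` exists so citations can quote the exact source lines they reference. A minimal sketch of what such a helper might look like (the real script's interface may differ):

```python
from pathlib import Path

def cite_lines(path, start, end):
    """Return 'path#Ln: text' strings for lines start..end (1-indexed, inclusive)."""
    lines = Path(path).read_text().splitlines()
    end = min(end, len(lines))  # clamp to the file's actual length
    return [f"{path}#L{n}: {lines[n - 1]}" for n in range(start, end + 1)]

# Hypothetical usage -- print the cited lines for a finding:
# for line in cite_lines("api/auth.py", 42, 47):
#     print(line)
```

Emitting the `path#Ln` prefix keeps the quoted lines in the same `file#Lx-Ly` citation format the report uses.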
Every review produces a standardized report (see `assets/review_template.md`):
- Metadata -- scope, date, review mode
- Summary -- 1-3 sentence overview
- Findings -- tables grouped by severity, each with area, `file#Lx-Ly` citation, description, and suggested fix
- Positive Observations -- acknowledge good code and patterns
- Checklist Summary -- pass / needs work / N/A per area
- Verdict -- approve / request changes / comment only, justification, confidence score
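Filled in, one severity group of the findings table might look like this (the path and issue are invented for illustration):

```markdown
### High

| Area | Location | Finding | Suggested fix |
|---|---|---|---|
| Security | `api/views.py#L88-L94` | SQL query built with string formatting | Use parameterized queries |
```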
Learn more:
- Full skill reference: `SKILL.md`
- Review standard: `references/review-standard.md`
- What to look for: `references/what-to-look-for.md`
- Comment best practices: `references/review-comments.md`
- Report template: `assets/review_template.md`
License: MIT