| name | description | argument-hint | allowed-tools |
|---|---|---|---|
| validate | Verify that an implementation plan was correctly executed by running each phase's success criteria against the working tree and producing a validation report. Use after the implement skill completes, when the user asks to "validate the plan", wants a post-implementation audit, or needs to confirm a feature is fully shipped per its plan. | | Read, Bash(git *), Bash(make *), Glob, Grep, Agent |
# Validate Plan
You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.
## Initial Setup

When invoked:

1. **Determine context** - Are you in an existing conversation or starting fresh?
   - If existing: review what was implemented in this session
   - If fresh: discover what was done through git and codebase analysis

2. **Locate the plan**:
   - If a plan path was provided, use it
   - Otherwise, search recent commits for plan references or ask the user (see the sketch after this list)

3. **Gather implementation evidence**:

   ```bash
   # Check recent commits
   git log --oneline -n 20
   git diff HEAD~N..HEAD   # Where N covers the implementation commits

   # Run comprehensive checks from the repo root
   cd $(git rev-parse --show-toplevel) && make check test
   ```
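For step 2, a minimal sketch of locating the plan when no path was given (the grep patterns and the assumption that plans are tracked markdown files are illustrative, not fixed conventions):

```bash
# Plan references in recent commit messages
git log --oneline -n 20 -i --grep='plan'

# Fall back to searching tracked markdown files for a plan document
git ls-files '*.md' | grep -i 'plan' || echo "No plan file found - ask the user"
```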
## Validation Process

### Step 1: Context Discovery

If starting fresh or needing more context, work through the numbered steps below. First, a special case: if the injected git context shows "not a git repo":
- Skip git-based evidence gathering (git log, git diff)
- Validate via file inspection, automated test commands, and the plan checklist
- Note in the report: "Git history unavailable — validation based on file inspection only"
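One way to detect this case directly (a sketch; the guard itself is an assumption, not part of the skill contract):

```bash
# Guard git-based evidence gathering: only run git commands
# inside an actual work tree.
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git log --oneline -n 20
else
  echo "Not a git repo - falling back to file inspection only"
fi
```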
1. **Read the implementation plan** completely.

2. **Identify what should have changed**:
   - List all files that should be modified
   - Note all success criteria (automated and manual)
   - Identify key functionality to verify

3. **Spawn parallel research agents** to verify the implementation. Spawn the agents below in parallel using the Agent tool and wait for ALL of them to complete before proceeding:
   - general-purpose agent - verify implementation details match plan specifications (analyzer role)
   - general-purpose agent - verify the implementation follows established codebase patterns (pattern-finder role)

   Example agent prompts:
   - "Analyze {component} and verify it implements {plan requirement} correctly"
   - "Find patterns similar to {new code} and check if conventions are followed"
Also gather evidence directly:

```bash
# Check recent commits
git log --oneline -n 20
git diff HEAD~N..HEAD   # Where N covers the implementation commits

# Run comprehensive checks from the repo root
cd $(git rev-parse --show-toplevel) && make check test
```
### Step 2: Systematic Validation

For each phase in the plan:

1. **Check completion status**:
   - Look for checkmarks in the plan (`- [x]`)
   - Verify the actual code matches the claimed completion (see the sketch after this list)

2. **Run automated verification**:
   - Execute each command from the plan's "Automated Verification" section
   - Document pass/fail status
   - If there are failures, investigate the root cause

3. **Assess manual criteria**:
   - List what needs manual testing
   - Provide clear steps for user verification

4. **Think deeply about edge cases**:
   - Were error conditions handled?
   - Are there missing validations?
   - Could the implementation break existing functionality?
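A minimal sketch of the completion-status check (the plan path is hypothetical; the checkbox format follows the `- [x]` convention above):

```bash
PLAN="path/to/plan.md"   # hypothetical path - substitute the actual plan

# Items still unchecked in the plan
grep -n -e '- \[ \]' "$PLAN" || echo "All checklist items are checked"

# Items claimed complete - verify each against the working tree
grep -n -e '- \[x\]' "$PLAN"
```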
### Step 3: Generate Validation Report
Create a comprehensive validation summary:

```markdown
## Validation Report: {Plan Name}

### Implementation Status
✓ Phase 1: {Name} - Fully implemented
✓ Phase 2: {Name} - Fully implemented
⚠️ Phase 3: {Name} - Partially implemented (see issues)

### Automated Verification Results
✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)

### Code Review Findings

#### Matches Plan:
- Database migration correctly adds {table}
- API endpoints implement specified methods
- Error handling follows plan

#### Deviations from Plan:
- Used different variable names in {file:line}
- Added extra validation in {file:line} (improvement)

#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration

### Manual Testing Required:
1. UI functionality:
   - [ ] Verify {feature} appears correctly
   - [ ] Test error states with invalid input
2. Integration:
   - [ ] Confirm works with existing {component}
   - [ ] Check performance with large datasets

### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for {scenario}
- Document new API endpoints

---
💬 Follow-up: if findings are localized, fix them and re-run `/skill:validate`. If findings imply plan-level changes, escalate to `/skill:revise <plan-path>` first.

**Next step:** `/skill:commit` — group the validated changes into atomic commits (skip if status is `needs_changes` — fix the gaps first, then re-run `/skill:validate`).

> 🆕 Tip: start a fresh session with `/new` first — chained skills work best with a clean context window.
```
## Handle Follow-ups

- **Validate does not edit code or plans.** It produces a report. Fixes happen in implement; plan revisions happen in revise.
- **Localized gaps.** If findings are small and localized, fix them in place and re-run `/skill:validate` for a fresh report.
- **Plan-level gaps.** If findings imply the plan itself is wrong (missing phases, wrong approach, untestable success criteria), escalate to `/skill:revise <plan-path>` first, then re-implement, then re-validate.
- **No append mode.** Each validation run produces a fresh report; there is no `## Follow-up` append. The previous block's `Next step:` stays valid only when status is `complete`.
## Working with Existing Context
If you were part of the implementation:
- Review the conversation history
- Check your todo list for what was completed
- Focus validation on work done in this session
- Be honest about any shortcuts or incomplete items
## Important Guidelines
- **Be thorough but practical** - focus on what matters
- **Run all automated checks** - don't skip verification commands
- **Document everything** - both successes and issues
- **Think critically** - question whether the implementation truly solves the problem
- **Consider maintenance** - will this be maintainable long-term?
## Validation Checklist
Always verify:
- [ ] All phases marked complete are actually done
- [ ] Automated tests pass
- [ ] Code follows existing patterns
- [ ] No regressions introduced
- [ ] Error handling is robust
- [ ] Documentation updated if needed
- [ ] Manual test steps are clear
## Relationship to Other Skills
Recommended workflow:

1. `/skill:implement` - execute the implementation
2. `/skill:commit` - create atomic commits for the changes
3. `/skill:validate` - verify implementation correctness
The validation works best after commits are made, as it can analyze the git history to understand what was implemented.
Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.