Add plannotator extension v0.19.10
---
name: plannotator-setup-goal
description: Create reviewed Codex goal setup packages for long-running /goal work. Use when the user wants to turn an idea, backlog, project mission, or vague objective into durable goal files under a project goals slug folder, with Plannotator review gates for the brief, the narrative plan with acceptance criteria, verification, blockers, and the final /goal prompt.
---
# Plannotator Setup Goal

## Overview

Create a durable goal package in the current project at `goals/<slug>/` so Codex `/goal` has a clear mission, guardrails, proof of done, and external memory. Use Plannotator as the user review UI: every critical document must be gated with `plannotator annotate <document.md> --gate` and revised until approved.

## Workflow

1. Confirm the working directory is the project root, or use the user-provided project directory.
2. Gather enough context to name the goal, define the intended outcome, identify constraints, find likely project docs, and determine proof of done.
3. Ask focused questions whenever the goal is vague, risky, too broad, missing a finish line, or missing verification. Do not proceed with guessed critical requirements.
4. Create a slug from the goal name and scaffold `goals/<slug>/` with:

   ```bash
   python3 <skill_dir>/scripts/scaffold_goal.py --root . --slug <slug> --title "<goal title>" --objective "<one sentence outcome>"
   ```
5. Draft and refine the critical documents in this order:
   - `brief.md`
   - `plan.md`
   - `verification.md`
   - `blockers.md`
   - `goal-prompt.md`
6. Gate each critical document with Plannotator before moving on:

   ```bash
   plannotator annotate goals/<slug>/<document.md> --gate
   ```
7. If Plannotator returns a denial, comments, or markup, treat that as user feedback. Revise the document, then rerun the same gate. Continue until approved.
8. After all gates pass, present the final path and the exact `/goal` prompt from `goal-prompt.md`.

## Document Standards

`brief.md` must state the mission, context, constraints, non-goals, ask-before rules, and a concise done condition.

`plan.md` is the central reviewed planning artifact. It must read like a clear solution narrative, not just a technical checklist. Include what is being built, why this approach is appropriate, how the solution will work, the main implementation slices, risks, phase boundaries, and acceptance criteria. Every important acceptance item needs observable evidence. For large missions, prefer several sequential goals over one endless goal.

`verification.md` must list exact verification commands and manual checks. Include expected pass conditions and where evidence should be recorded.

`blockers.md` must capture open questions, user-decision points, dangerous operations that require approval, and conditions that should pause the goal.

`goal-prompt.md` must contain the final command the user can paste into Codex. It should reference the goal package files as the durable source of truth, tell Codex to append evidence to `progress.jsonl`, and define when to stop or ask.

`progress.jsonl` is append-only evidence. Do not gate it. During execution, append concrete progress and proof, not summaries of intent.

## Plannotator Rules

Use Plannotator as the review surface, not as a passive preview. The command `plannotator annotate <document.md> --gate` presents the document to the user and captures approval or denial feedback.

Do not skip gates for critical documents. Do not mark a document ready because it seems reasonable. The user must approve it through the gate.

If a document is denied, update it from the captured feedback and rerun the gate. Keep the loop tight: one document, one review, one revision cycle.

## Goal Prompt Rules

Write the final `/goal` prompt as a compact product brief, not a raw todo dump.

Include:

- the outcome
- relevant files
- constraints and non-goals
- plan acceptance criteria and evidence
- verification commands
- ask-before rules
- an instruction to use `goals/<slug>/` as the durable plan and append evidence to `progress.jsonl`

Avoid:

- open-ended improvement loops
- mixed unrelated missions
- vague words like "improve" without measurable proof
- instructions to keep working forever
- hidden assumptions that are not written into the files

## Quality Checks

Before finalizing, verify:

- The goal has one clear finish line.
- The plan explains what, why, and how before listing work slices.
- The plan acceptance criteria can be audited from real artifacts.
- Verification commands are concrete.
- Risky actions have ask-before rules.
- The final `/goal` prompt tells Codex where the goal files live.
- All critical documents have passed Plannotator gates.

interface:
  display_name: "Plannotator Goal Setup"
  short_description: "Build reviewed Codex goal packages"
  default_prompt: "Use $plannotator-setup-goal to create a reviewed goal package for this project."

extensions/plannotator/skills/plannotator-setup-goal/scripts/scaffold_goal.py (executable file, 219 lines)

#!/usr/bin/env python3
"""Scaffold a reviewed Codex goal package under goals/<slug>/."""

from __future__ import annotations

import argparse
import datetime as dt
import json
import re
import sys
from pathlib import Path


def slugify(value: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", value.strip().lower()).strip("-")
    slug = re.sub(r"-{2,}", "-", slug)
    return slug or "goal"


def write_file(path: Path, content: str, force: bool) -> None:
    if path.exists() and not force:
        return
    path.write_text(content, encoding="utf-8")

def brief(title: str, objective: str) -> str:
    return f"""# {title}

## Outcome

{objective or "TODO: State the concrete outcome in one or two sentences."}

## Context

- TODO: List the project facts, files, docs, user needs, and constraints Codex must know.

## Constraints

- TODO: List behavior, APIs, data, UX, performance, compatibility, or process rules that must not regress.

## Non-Goals

- TODO: List work that is out of scope for this goal.

## Ask Before

- TODO: List decisions, risky operations, external dependencies, product calls, and destructive changes that require user approval.

## Done Means

- TODO: Summarize the finish line. Detailed acceptance evidence belongs in `plan.md`.
"""

def plan(title: str) -> str:
    return f"""# Plan: {title}

## Solution Overview

TODO: Describe what is being built in plain language. Explain the shape of the solution before diving into tasks.

## Why This Approach

TODO: Explain why this direction is appropriate for the project, user goal, constraints, and risk level.

## How It Will Work

TODO: Describe the main moving parts, data flow, user flow, files, APIs, or systems involved. Keep this narrative enough that a reviewer can understand the intended solution.

## Slices

| Slice | Purpose | Main files or systems | Done when | Risks |
| --- | --- | --- | --- | --- |
| 1 | TODO | TODO | TODO | TODO |
| 2 | TODO | TODO | TODO | TODO |

## Sequencing

- TODO: Explain the order of execution and which slices block later slices.

## Phase Boundaries

- TODO: State when this goal should end and a new goal should be created instead of stretching this one.

## Steering Notes

- TODO: Capture taste calls, product preferences, or review checkpoints the user should steer during execution.

## Acceptance Criteria

- [ ] TODO: Requirement with concrete observable evidence.
- [ ] TODO: Requirement with concrete observable evidence.

## Required Evidence

| Requirement | Evidence to inspect | Where evidence is recorded |
| --- | --- | --- |
| TODO | TODO | TODO |

## Completion Audit

Before marking the goal complete, Codex must map every explicit requirement, file, command, check, and deliverable to real evidence. If any item is missing, incomplete, weakly verified, or uncertain, the goal is not complete.
"""

def verification(title: str) -> str:
    return f"""# Verification: {title}

## Commands

| Command | Purpose | Expected pass condition | Evidence location |
| --- | --- | --- | --- |
| TODO | TODO | TODO | TODO |

## Manual Checks

- TODO: Add browser checks, screenshots, release checks, PR checks, or human review steps.

## Evidence Rules

- Record verification results in `progress.jsonl`.
- Include command, status, timestamp, and artifact path when available.
- Do not rely on passing tests unless they cover the requirement being claimed.
"""

def blockers(title: str) -> str:
    return f"""# Blockers: {title}

## Open Questions

- TODO: Questions that must be answered before or during execution.

## Stop And Ask

- TODO: Conditions that should pause the goal and ask the user.

## Dangerous Or High-Risk Actions

- TODO: Destructive changes, migrations, dependency changes, security-sensitive work, billing/auth changes, or external operations requiring approval.

## Known Blockers

- TODO: Current blockers, owners, and next action.
"""

def goal_prompt(slug: str, title: str, objective: str) -> str:
    prompt_objective = objective or f"Complete the reviewed goal package for {title}."
    return f"""# Codex Goal Prompt: {title}

After every critical document in this folder is approved with Plannotator, paste or set this goal:

```text
/goal {prompt_objective}

Use `goals/{slug}/` as the durable source of truth:
- Read `brief.md` for the mission, context, constraints, non-goals, and ask-before rules.
- Follow `plan.md` for the solution overview, implementation slices, risks, and acceptance criteria.
- Run the checks in `verification.md` and record evidence.
- Append concrete progress and proof to `progress.jsonl`.
- Pause and ask the user for anything listed in `blockers.md` or any similarly risky unresolved decision.

Do not mark the goal complete until every acceptance item is backed by real evidence and the required verification has passed or the remaining blocker is explicitly documented for the user.
```
"""

def progress_entry(title: str, objective: str) -> str:
    now = dt.datetime.now(dt.timezone.utc).replace(microsecond=0).isoformat()
    entry = {
        "type": "goal_package_created",
        "timestamp": now,
        "title": title,
        "objective": objective,
        "evidence": "Initial scaffold created; critical documents still require Plannotator gate approval.",
    }
    return json.dumps(entry, ensure_ascii=True) + "\n"

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--root", default=".", help="Project root where goals/ should be created.")
    parser.add_argument("--slug", help="Goal folder name. Defaults to a slugified title.")
    parser.add_argument("--title", required=True, help="Human-readable goal title.")
    parser.add_argument("--objective", default="", help="One-sentence goal outcome.")
    parser.add_argument("--force", action="store_true", help="Overwrite existing scaffold files.")
    return parser.parse_args()

def main() -> int:
    args = parse_args()
    root = Path(args.root).resolve()
    slug = slugify(args.slug or args.title)
    goal_dir = root / "goals" / slug
    goal_dir.mkdir(parents=True, exist_ok=True)

    files = {
        "brief.md": brief(args.title, args.objective),
        "plan.md": plan(args.title),
        "verification.md": verification(args.title),
        "blockers.md": blockers(args.title),
        "goal-prompt.md": goal_prompt(slug, args.title, args.objective),
    }
    for name, content in files.items():
        write_file(goal_dir / name, content, args.force)

    progress_path = goal_dir / "progress.jsonl"
    if not progress_path.exists() or args.force:
        write_file(progress_path, progress_entry(args.title, args.objective), args.force)

    print(goal_dir)
    for name in sorted([*files.keys(), "progress.jsonl"]):
        print(goal_dir / name)
    return 0


if __name__ == "__main__":
    sys.exit(main())