Add plannotator extension v0.19.10
extensions/plannotator/README.md (new file, 229 lines)
@@ -0,0 +1,229 @@
# Plannotator for Pi

Plannotator integration for the [Pi coding agent](https://github.com/badlogic/pi-mono/tree/main/packages/coding-agent). Adds file-based plan mode with a visual browser UI for reviewing, annotating, and approving agent plans.

## Install

**From npm** (recommended):

```bash
pi install npm:@plannotator/pi-extension
```

**From source:**

```bash
git clone https://github.com/backnotprop/plannotator.git
pi install ./plannotator/apps/pi-extension
```

**Try without installing:**

```bash
pi -e npm:@plannotator/pi-extension
```

## Build from source

If installing from a local clone, build the HTML assets first:

```bash
cd plannotator
bun install
bun run build:pi
```

This builds the plan review and code review UIs and copies them into `apps/pi-extension/`.

## Usage

### Plan mode

Start Pi in plan mode:

```bash
pi --plan
```

Or toggle it during a session with `/plannotator` or `Ctrl+Alt+P`. The command accepts an optional file path argument (`/plannotator plans/auth.md`) or prompts you to choose one interactively.
In plan mode the agent is constrained: writes and edits are limited to the plan file, and the system prompt steers it away from destructive commands. It explores your codebase, then writes a plan using markdown checklists:

```markdown
- [ ] Add validation to the login form
- [ ] Write tests for the new validation logic
- [ ] Update error messages in the UI
```

When the agent calls `plannotator_submit_plan`, the Plannotator UI opens in your browser. You can:

- **Approve** the plan to begin execution
- **Deny with annotations** to send structured feedback back to the agent
- **Approve with notes** to proceed but include implementation guidance

The agent iterates on the plan until you approve, then executes with full tool access. On resubmission, Plan Diff highlights what changed since the previous version.

### Configuring per-phase behavior

Plannotator loads configuration in three layers:

1. Built-in base config shipped with the package: `plannotator.json`
2. Global user config: `~/.pi/agent/plannotator.json`
3. Project-local config: `<cwd>/.pi/plannotator.json`

Later layers override earlier ones. If a field is omitted, it inherits the value from lower-precedence layers. Setting a value to `null`, an empty string, or an empty array clears the inherited value instead of merging with it. You can also set `defaults` or an entire phase object to `null` to clear all inherited settings from lower-precedence layers. The sketch below shows these rules in action.
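To see how the layers combine, here is a minimal sketch using `resolvePhaseProfile` from the extension's internal `config.ts` (part of this commit). The helper is not a public API, so treat the import path as illustrative; the clearing behavior shown matches the extension's own tests.

```ts
import { resolvePhaseProfile } from "./config";

// defaults stand in for a lower-precedence layer; the planning phase overrides them.
const profile = resolvePhaseProfile(
	{
		defaults: { thinking: "low", statusLabel: "Ready", activeTools: ["bash"] },
		phases: { planning: { statusLabel: "", activeTools: [], thinking: null } },
	},
	"planning",
);

console.log(profile.thinking);    // undefined: null cleared the inherited "low"
console.log(profile.statusLabel); // undefined: "" cleared the inherited "Ready"
console.log(profile.activeTools); // []: an empty array means no extra phase tools
```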
#### Top-level shape

```json
{
  "defaults": {
    "model": { "provider": "anthropic", "id": "claude-sonnet-4-5" },
    "thinking": "medium",
    "activeTools": ["read", "bash"],
    "statusLabel": "Ready",
    "systemPrompt": "Optional prompt template"
  },
  "phases": {
    "planning": {
      "model": null,
      "thinking": null,
      "activeTools": ["grep", "find", "ls", "plannotator_submit_plan"],
      "statusLabel": "⏸ plan",
      "systemPrompt": "[PLANNING]\nPlan file: ${planFilePath}"
    },
    "executing": {
      "model": { "provider": "anthropic", "id": "claude-sonnet-4-5" },
      "thinking": "high",
      "activeTools": [],
      "statusLabel": "",
      "systemPrompt": "[EXECUTING]\nRemaining steps:\n${todoList}"
    },
    "reviewing": {
      "systemPrompt": "..."
    }
  }
}
```
#### Option reference

| Option | Type | Meaning |
|--------|------|---------|
| `defaults` | object | Base values applied to every phase before phase-specific overrides |
| `phases` | object | Phase-specific overrides |
| `phases.planning` | object | Settings for planning mode |
| `phases.executing` | object | Settings for execution mode |
| `phases.reviewing` | object | Reserved for future review-mode customization |
| `model` | `{ provider, id }` \| `null` | Sets the model for the phase; `null` leaves the current model unchanged |
| `thinking` | `minimal` \| `low` \| `medium` \| `high` \| `xhigh` \| `null` | Sets the thinking level; `null` leaves the current level unchanged |
| `activeTools` | string[] \| `null` | Extra tools to enable for the phase; `[]` or `null` means no extra phase tools |
| `statusLabel` | string \| `null` | Optional UI label for the phase; empty/null clears it |
| `systemPrompt` | string \| `null` | Phase system prompt template; empty/null disables prompt injection |
#### Prompt variables

Use these inside `systemPrompt` strings:

- `${planFilePath}` — current plan file path
- `${todoList}` — remaining checklist items as markdown checkboxes
- `${completedCount}` — completed checklist count
- `${totalCount}` — total checklist count
- `${remainingCount}` — remaining checklist count
- `${phase}` — current runtime phase (`planning`, `executing`, `reviewing`, or `idle`)
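For reference, this is how a template gets filled in. The sketch calls the extension's internal `buildPromptVariables` and `renderTemplate` helpers from `config.ts` (shipped in this commit), so the import path is illustrative rather than a public API.

```ts
import { buildPromptVariables, renderTemplate } from "./config";

const vars = buildPromptVariables({
	planFilePath: "plans/auth.md",
	phase: "executing",
	totalCount: 3,
	completedCount: 1,
	todoList: "- [ ] 2. Write tests\n- [ ] 3. Update error messages",
});

const rendered = renderTemplate("[EXECUTING]\nRemaining steps:\n${todoList}", vars);
// rendered.text now contains the two remaining checklist items;
// rendered.unknownVariables lists any ${...} names not defined above.
```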
#### Behavior notes

- Unknown template variables trigger a warning in the UI and are rendered as empty strings.
- `activeTools` are additive with the tools currently active in the session, so Plannotator still preserves tools provided by other extensions.
- Execution progress remains dynamic (`[DONE:n]` + checklist tracking), even if `statusLabel` is set.

#### Example files

- Built-in base config shipped with the package: `apps/pi-extension/plannotator.json`
- Global user override: `~/.pi/agent/plannotator.json`
- Project-local override: `<cwd>/.pi/plannotator.json`

### Code review

Run `/plannotator-review` to open your current git changes in the code review UI. Annotate specific lines, switch between diff views (uncommitted, staged, last commit, branch), and submit feedback that gets sent to the agent.

### Shared Plannotator event API

Plannotator also listens on the shared `plannotator:request` event channel so other extensions can reuse the same browser review flows without importing Plannotator internals.

Supported actions and payloads:

- `plan-review`: `{ planContent, planFilePath? }`
- `review-status`: `{ reviewId }`
- `code-review`: `{ cwd?, defaultBranch?, diffType? }`
- `annotate`: `{ filePath, markdown?, mode?, folderPath? }`
- `annotate-last`: `{ markdown? }`
- `archive`: `{ customPlanPath? }`

Plan review is asynchronous:

- Callers send `plannotator:request` with action `plan-review`.
- Plannotator opens the browser review and immediately responds with `{ status: "handled", result: { status: "pending", reviewId } }`.
- When the human approves or rejects in the browser, Plannotator emits `plannotator:review-result` with `{ reviewId, approved, feedback, savedPath?, agentSwitch?, permissionMode? }`.
- Callers can query `review-status` with the same `reviewId` to recover from startup races or session restarts.

The other shared actions remain request/response flows. Payloads are intentionally minimal and only include fields the shared implementation actually uses. A caller-side sketch of the plan-review flow follows.
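Here is a minimal caller-side sketch of that flow. The event names and payload fields are the ones documented above; the two `declare`d functions stand in for however your extension emits and subscribes to shared events through Pi's extension API, and the exact request envelope may differ, so treat this as an illustration only.

```ts
// Shapes documented in this README; field types are inferred, not guaranteed.
type PlanReviewRequest = { action: "plan-review"; planContent: string; planFilePath?: string };
type PlanReviewAck = { status: "handled"; result: { status: "pending"; reviewId: string } };
type PlanReviewResult = {
	reviewId: string;
	approved: boolean;
	feedback: string;
	savedPath?: string;
	agentSwitch?: unknown;
	permissionMode?: unknown;
};

// Placeholders: wire these to Pi's actual event emitter/subscriber.
declare function emitPlannotatorRequest(payload: PlanReviewRequest): Promise<PlanReviewAck>;
declare function onPlannotatorReviewResult(handler: (result: PlanReviewResult) => void): void;

const ack = await emitPlannotatorRequest({
	action: "plan-review",
	planContent: "- [ ] Step one\n- [ ] Step two",
	planFilePath: "plans/example.md",
});

// The ack only says the review is pending; the human's decision arrives later.
onPlannotatorReviewResult((result) => {
	if (result.reviewId !== ack.result.reviewId) return;
	console.log(result.approved ? "Plan approved" : `Denied with feedback: ${result.feedback}`);
});
```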
### Markdown annotation

Run `/plannotator-annotate <file.md>` to open any markdown file in the annotation UI. Useful for reviewing documentation or design specs with the agent.

### Annotate last message

Run `/plannotator-last` to annotate the agent's most recent response. The message opens in the annotation UI where you can highlight text, add comments, and send structured feedback back to the agent.

### Archive browser

The Plannotator archive browser is exposed through the shared event API as the `archive` action, which opens the saved plan/decision browser. There is no dedicated archive command yet.

### Progress tracking

During execution, the agent marks completed steps with `[DONE:n]` markers. Progress is shown in the status line and as a checklist widget in the terminal.
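The checklist state also feeds the `${todoList}` and count variables described earlier. Below is a small sketch using `formatTodoList` from the extension's internal `config.ts`; how `[DONE:n]` markers are parsed out of agent responses is handled elsewhere in the extension.

```ts
import { formatTodoList } from "./config";

const stats = formatTodoList([
	{ step: 1, text: "Add validation to the login form", completed: true },
	{ step: 2, text: "Write tests for the new validation logic", completed: false },
	{ step: 3, text: "Update error messages in the UI", completed: false },
]);

// stats.completedCount === 1, stats.remainingCount === 2, and stats.todoList is:
// "- [ ] 2. Write tests for the new validation logic\n- [ ] 3. Update error messages in the UI"
```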
## Commands

| Command | Description |
|---------|-------------|
| `/plannotator` | Toggle plan mode. The agent writes a markdown plan file anywhere in the working directory and submits its path |
| `/plannotator-status` | Show current phase, plan file, and progress |
| `/plannotator-review` | Open code review UI for current changes |
| `/plannotator-annotate <file>` | Open markdown file in annotation UI |
| `/plannotator-last` | Annotate the last assistant message |

## Flags

| Flag | Description |
|------|-------------|
| `--plan` | Start in plan mode |

## Keyboard shortcuts

| Shortcut | Description |
|----------|-------------|
| `Ctrl+Alt+P` | Toggle plan mode |

## How it works

The extension manages a state machine: **idle** → **planning** → **executing** → **idle**.

During **planning**:

- All tools from other extensions remain available
- Bash is unrestricted — the agent is guided by the system prompt not to run destructive commands
- Writes and edits are restricted to the plan file

During **executing**:

- Full tool access: `read`, `bash`, `edit`, `write`
- Progress tracked via `[DONE:n]` markers in agent responses
- Plan re-read from disk each turn to stay current

State persists across session restarts via Pi's `appendEntry` API.

## Requirements

- [Pi](https://github.com/mariozechner/pi) >= 0.53.0
extensions/plannotator/assistant-message.ts (new file, 74 lines)
@@ -0,0 +1,74 @@
import type { ExtensionContext } from "@mariozechner/pi-coding-agent";

type AssistantTextBlock = { type?: string; text?: string };

type AssistantMessageLike = {
	role?: unknown;
	content?: unknown;
};

type SessionEntryLike = {
	id: string;
	type: string;
	message?: AssistantMessageLike;
};

export type LastAssistantMessageSnapshot = {
	entryId: string;
	text: string;
};

function isAssistantMessage(message: AssistantMessageLike): message is { role: "assistant"; content: AssistantTextBlock[] } {
	return message.role === "assistant" && Array.isArray(message.content);
}

function getTextContent(message: { content: AssistantTextBlock[] }): string {
	return message.content
		.filter((block): block is { type: "text"; text: string } => block.type === "text")
		.map((block) => block.text)
		.join("\n");
}

function isRecord(value: unknown): value is Record<string, unknown> {
	return typeof value === "object" && value !== null;
}

export function getAssistantMessageText(message: unknown): string | null {
	if (!isRecord(message)) return null;
	const candidate = { role: message.role, content: message.content };
	if (!isAssistantMessage(candidate)) return null;
	const text = getTextContent(candidate);
	return text.trim() ? text : null;
}

function getCurrentBranch(ctx: ExtensionContext): SessionEntryLike[] {
	return ctx.sessionManager.getBranch() as SessionEntryLike[];
}

export function getLastAssistantMessageSnapshot(ctx: ExtensionContext): LastAssistantMessageSnapshot | null {
	// "Last" means the active conversation branch, not the newest message anywhere
	// in the append-only session file.
	const branch = getCurrentBranch(ctx);
	for (let i = branch.length - 1; i >= 0; i--) {
		const entry = branch[i];
		if (entry.type === "message" && entry.message) {
			const text = getAssistantMessageText(entry.message);
			if (text) return { entryId: entry.id, text };
		}
	}
	return null;
}

export function getLastAssistantMessageText(ctx: ExtensionContext): string | null {
	return getLastAssistantMessageSnapshot(ctx)?.text ?? null;
}

export function hasSessionMovedPastEntry(ctx: ExtensionContext, entryId: string): boolean {
	if (!ctx.isIdle()) return true;

	const branch = getCurrentBranch(ctx);
	const index = branch.findIndex((entry) => entry.id === entryId);
	if (index === -1) return true;

	return branch.slice(index + 1).some((entry) => entry.type === "message");
}
extensions/plannotator/config.test.ts (new file, 166 lines)
@@ -0,0 +1,166 @@
import { afterEach, describe, expect, test } from "bun:test";
import { mkdirSync, mkdtempSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { loadPlannotatorConfig, formatTodoList, renderTemplate, resolvePhaseProfile } from "./config";

const tempDirs: string[] = [];
const originalHome = process.env.HOME;

function makeTempDir(prefix: string): string {
	const dir = mkdtempSync(join(tmpdir(), prefix));
	tempDirs.push(dir);
	return dir;
}

afterEach(() => {
	if (originalHome === undefined) {
		delete process.env.HOME;
	} else {
		process.env.HOME = originalHome;
	}

	for (const dir of tempDirs.splice(0)) {
		rmSync(dir, { recursive: true, force: true });
	}
});

describe("plannotator config", () => {
	test("loads the shipped internal base config", () => {
		const cwdDir = makeTempDir("plannotator-config-base-");
		process.env.HOME = makeTempDir("plannotator-config-home-base-");

		const loaded = loadPlannotatorConfig(cwdDir);
		const planning = resolvePhaseProfile(loaded.config, "planning");

		expect(loaded.warnings).toEqual([]);
		expect(planning.statusLabel).toBe("⏸ plan");
		expect(planning.activeTools).toEqual(["grep", "find", "ls", "plannotator_submit_plan"]);
	});

	test("allows a project config to clear an inherited phase with null", () => {
		const homeDir = makeTempDir("plannotator-config-home-null-");
		const cwdDir = makeTempDir("plannotator-config-cwd-null-");
		process.env.HOME = homeDir;

		const globalConfigDir = join(homeDir, ".pi", "agent");
		const projectConfigDir = join(cwdDir, ".pi");
		mkdirSync(globalConfigDir, { recursive: true });
		mkdirSync(projectConfigDir, { recursive: true });
		writeFileSync(
			join(globalConfigDir, "plannotator.json"),
			JSON.stringify({
				phases: { planning: { statusLabel: "global", activeTools: ["bash"] } },
			}),
			"utf-8",
		);
		writeFileSync(
			join(projectConfigDir, "plannotator.json"),
			JSON.stringify({
				phases: { planning: null },
			}),
			"utf-8",
		);

		const loaded = loadPlannotatorConfig(cwdDir);
		const planning = resolvePhaseProfile(loaded.config, "planning");

		expect(loaded.warnings).toEqual([]);
		expect(planning.statusLabel).toBeUndefined();
		expect(planning.activeTools).toBeUndefined();
	});

	test("loads global and project configs with project precedence", () => {
		const homeDir = makeTempDir("plannotator-config-home-");
		const cwdDir = makeTempDir("plannotator-config-cwd-");
		process.env.HOME = homeDir;

		const globalConfigDir = join(homeDir, ".pi", "agent");
		const projectConfigDir = join(cwdDir, ".pi");
		mkdirSync(globalConfigDir, { recursive: true });
		mkdirSync(projectConfigDir, { recursive: true });
		writeFileSync(
			join(globalConfigDir, "plannotator.json"),
			JSON.stringify({
				defaults: {
					thinking: "low",
					model: { provider: "anthropic", id: "claude-sonnet-4-5" },
				},
				phases: { planning: { statusLabel: "global", activeTools: ["bash"] } },
			}),
			"utf-8",
		);
		writeFileSync(
			join(projectConfigDir, "plannotator.json"),
			JSON.stringify({
				defaults: { thinking: null, model: null },
				phases: { planning: { statusLabel: "project", activeTools: [] } },
			}),
			"utf-8",
		);

		const loaded = loadPlannotatorConfig(cwdDir);
		const planning = resolvePhaseProfile(loaded.config, "planning");

		expect(loaded.warnings).toEqual([]);
		expect(planning.thinking).toBeUndefined();
		expect(planning.model).toBeUndefined();
		expect(planning.statusLabel).toBe("project");
		expect(planning.activeTools).toEqual([]);
	});

	test("treats empty strings as clearing values", () => {
		const profile = resolvePhaseProfile(
			{
				defaults: { statusLabel: "base", systemPrompt: "base prompt", activeTools: ["bash"] },
				phases: { planning: { statusLabel: "", systemPrompt: "", activeTools: [] } },
			},
			"planning",
		);

		expect(profile.statusLabel).toBeUndefined();
		expect(profile.systemPrompt).toBeUndefined();
		expect(profile.activeTools).toEqual([]);
	});

	test("allows clearing an entire phase with null", () => {
		const profile = resolvePhaseProfile(
			{
				defaults: { thinking: "low", activeTools: ["bash"], statusLabel: "base" },
				phases: { planning: null },
			},
			"planning",
		);

		expect(profile.thinking).toBe("low");
		expect(profile.activeTools).toEqual(["bash"]);
		expect(profile.statusLabel).toBe("base");
	});

	test("renders prompt templates and reports unknown variables", () => {
		const rendered = renderTemplate("Hello ${name} ${missing}", {
			planFilePath: "PLAN.md",
			todoList: "- [ ] A",
			completedCount: 1,
			totalCount: 2,
			remainingCount: 1,
			phase: "planning",
		});

		expect(rendered.text).toBe("Hello  ");
		expect(rendered.unknownVariables).toEqual(["name", "missing"]);
	});

	test("formats todo lists from checklist items", () => {
		const stats = formatTodoList([
			{ step: 1, text: "First", completed: true },
			{ step: 2, text: "Second", completed: false },
			{ step: 3, text: "Third", completed: false },
		]);

		expect(stats.completedCount).toBe(1);
		expect(stats.totalCount).toBe(3);
		expect(stats.remainingCount).toBe(2);
		expect(stats.todoList).toBe("- [ ] 2. Second\n- [ ] 3. Third");
	});
});
extensions/plannotator/config.ts (new file, 318 lines)
@@ -0,0 +1,318 @@
import { existsSync, readFileSync } from "node:fs";
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import { homedir } from "node:os";
import type { ThinkingLevel } from "@mariozechner/pi-agent-core";

export type PhaseName = "planning" | "executing" | "reviewing";
export type RuntimePhase = PhaseName | "idle";

export interface PhaseModelRef {
	provider: string;
	id: string;
}

/**
 * Config values loaded from JSON can intentionally clear inherited values.
 *
 * - `null` clears a value from a parent config.
 * - `[]` clears active tools.
 * - `""` clears string values.
 */
export interface PhaseProfile {
	model?: PhaseModelRef | null;
	thinking?: ThinkingLevel | null;
	activeTools?: string[] | null;
	statusLabel?: string | null;
	systemPrompt?: string | null;
}

export interface PlannotatorConfig {
	defaults?: PhaseProfile | null;
	phases?: Partial<Record<PhaseName, PhaseProfile | null>>;
}

export interface LoadedPlannotatorConfig {
	config: PlannotatorConfig;
	warnings: string[];
}

export interface ResolvedPhaseProfile {
	model?: PhaseModelRef;
	thinking?: ThinkingLevel;
	activeTools?: string[];
	statusLabel?: string;
	systemPrompt?: string;
}

export interface PromptVariables {
	planFilePath: string;
	todoList: string;
	completedCount: number;
	totalCount: number;
	remainingCount: number;
	phase: RuntimePhase;
}

export interface PromptRenderResult {
	text: string;
	unknownVariables: string[];
}

const INTERNAL_CONFIG_PATH = join(dirname(fileURLToPath(import.meta.url)), "plannotator.json");
const PHASES: PhaseName[] = ["planning", "executing", "reviewing"];
const THINKING_LEVELS = new Set<string>(["minimal", "low", "medium", "high", "xhigh"]);

function getAgentConfigDir(): string {
	const envDir = process.env.PI_CODING_AGENT_DIR;
	if (envDir) return envDir;
	return join(process.env.HOME || process.env.USERPROFILE || homedir(), ".pi", "agent");
}

function isRecord(value: unknown): value is Record<string, unknown> {
	return typeof value === "object" && value !== null && !Array.isArray(value);
}

function readJsonFile(path: string): { data?: unknown; error?: string } {
	if (!existsSync(path)) return {};

	try {
		return { data: JSON.parse(readFileSync(path, "utf-8")) };
	} catch (error) {
		return { error: `Failed to parse ${path}: ${error instanceof Error ? error.message : String(error)}` };
	}
}

function normalizeModel(value: unknown): PhaseModelRef | null | undefined {
	if (value === null) return null;
	if (!isRecord(value)) return undefined;

	const provider = typeof value.provider === "string" ? value.provider.trim() : "";
	const id = typeof value.id === "string" ? value.id.trim() : "";
	if (!provider || !id) return undefined;
	return { provider, id };
}

function normalizeThinking(value: unknown): ThinkingLevel | null | undefined {
	if (value === null) return null;
	if (typeof value !== "string") return undefined;
	const trimmed = value.trim();
	if (!trimmed) return null;

	return THINKING_LEVELS.has(trimmed as ThinkingLevel) ? (trimmed as ThinkingLevel) : undefined;
}

function normalizeTools(value: unknown): string[] | null | undefined {
	if (value === null) return null;
	if (!Array.isArray(value)) return undefined;
	if (value.length === 0) return [];

	const tools = value.filter((tool): tool is string => typeof tool === "string" && tool.trim().length > 0);
	return tools.length > 0 ? tools : undefined;
}

function normalizeLabel(value: unknown): string | null | undefined {
	if (value === null) return null;
	if (typeof value !== "string") return undefined;
	const trimmed = value.trim();
	return trimmed.length > 0 ? trimmed : null;
}

function normalizePrompt(value: unknown): string | null | undefined {
	if (value === null) return null;
	if (typeof value !== "string") return undefined;
	return value.length > 0 ? value : null;
}

function normalizeProfile(raw: unknown): PhaseProfile | null | undefined {
	if (raw === null) return null;
	if (!isRecord(raw)) return undefined;

	const profile: PhaseProfile = {};

	if ("model" in raw) profile.model = normalizeModel(raw.model);
	if ("thinking" in raw) profile.thinking = normalizeThinking(raw.thinking);
	if ("thinkingLevel" in raw && profile.thinking === undefined) profile.thinking = normalizeThinking(raw.thinkingLevel);
	if ("activeTools" in raw) profile.activeTools = normalizeTools(raw.activeTools);
	if ("statusLabel" in raw) profile.statusLabel = normalizeLabel(raw.statusLabel);
	if ("systemPrompt" in raw) profile.systemPrompt = normalizePrompt(raw.systemPrompt);

	return profile;
}

function cloneProfile(profile: PhaseProfile | null | undefined): PhaseProfile | null | undefined {
	if (profile === null || profile === undefined) return profile;
	return { ...profile, activeTools: profile.activeTools ? [...profile.activeTools] : profile.activeTools };
}

function mergeProfile(base: PhaseProfile | null | undefined, override: PhaseProfile | null | undefined): PhaseProfile | null | undefined {
	if (override === null) return null;
	if (override === undefined) return cloneProfile(base);
	if (base === null || base === undefined) return cloneProfile(override);

	const merged: PhaseProfile = {
		model: override.model !== undefined ? override.model : base.model,
		thinking: override.thinking !== undefined ? override.thinking : base.thinking,
		activeTools: override.activeTools !== undefined ? override.activeTools : base.activeTools,
		statusLabel: override.statusLabel !== undefined ? override.statusLabel : base.statusLabel,
		systemPrompt: override.systemPrompt !== undefined ? override.systemPrompt : base.systemPrompt,
	};

	return merged;
}

function mergeConfig(base: PlannotatorConfig, override: PlannotatorConfig): PlannotatorConfig {
	const phases: Partial<Record<PhaseName, PhaseProfile | null>> = {};
	for (const phase of PHASES) {
		const merged = mergeProfile(base.phases?.[phase], override.phases?.[phase]);
		if (merged !== undefined) phases[phase] = merged;
	}

	return {
		defaults: mergeProfile(base.defaults, override.defaults),
		phases: Object.keys(phases).length > 0 ? phases : undefined,
	};
}

function loadConfigSource(path: string): { config: PlannotatorConfig; warning?: string } {
	const parsed = readJsonFile(path);
	if (parsed.error) {
		return { config: {}, warning: parsed.error };
	}

	const raw = parsed.data;
	if (!isRecord(raw)) return { config: {} };

	const config: PlannotatorConfig = {};
	if ("defaults" in raw) config.defaults = normalizeProfile(raw.defaults);

	if ("phases" in raw && isRecord(raw.phases)) {
		const phases: Partial<Record<PhaseName, PhaseProfile | null>> = {};
		for (const phase of PHASES) {
			const normalized = normalizeProfile(raw.phases[phase]);
			if (normalized !== undefined) phases[phase] = normalized;
		}
		if (Object.keys(phases).length > 0) config.phases = phases;
	}

	return { config };
}

export function loadPlannotatorConfig(cwd: string): LoadedPlannotatorConfig {
	const warnings: string[] = [];

	const internal = loadConfigSource(INTERNAL_CONFIG_PATH);
	if (internal.warning) warnings.push(internal.warning);

	const globalPath = join(getAgentConfigDir(), "plannotator.json");
	const globalConfig = loadConfigSource(globalPath);
	if (globalConfig.warning) warnings.push(globalConfig.warning);

	const projectPath = join(cwd, ".pi", "plannotator.json");
	const projectConfig = loadConfigSource(projectPath);
	if (projectConfig.warning) warnings.push(projectConfig.warning);

	const merged = mergeConfig(mergeConfig(internal.config, globalConfig.config), projectConfig.config);
	return { config: merged, warnings };
}

export function resolvePhaseProfile(config: PlannotatorConfig, phase: PhaseName): ResolvedPhaseProfile {
	const defaults = config.defaults ?? {};
	const phaseConfig = config.phases?.[phase] ?? {};

	return {
		model: resolveModel(defaults.model, phaseConfig.model),
		thinking: resolveThinking(defaults.thinking, phaseConfig.thinking),
		activeTools: resolveTools(defaults.activeTools, phaseConfig.activeTools),
		statusLabel: resolveString(defaults.statusLabel, phaseConfig.statusLabel),
		systemPrompt: resolveString(defaults.systemPrompt, phaseConfig.systemPrompt),
	};
}

function resolveModel(base: PhaseModelRef | null | undefined, override: PhaseModelRef | null | undefined): PhaseModelRef | undefined {
	if (override !== undefined) {
		return override ?? undefined;
	}
	return base ?? undefined;
}

function resolveThinking(base: ThinkingLevel | null | undefined, override: ThinkingLevel | null | undefined): ThinkingLevel | undefined {
	if (override !== undefined) {
		return override ?? undefined;
	}
	return base ?? undefined;
}

function resolveTools(base: string[] | null | undefined, override: string[] | null | undefined): string[] | undefined {
	if (override !== undefined) {
		if (override === null) return [];
		return [...override];
	}
	if (base === null) return [];
	return base ? [...base] : undefined;
}

function resolveString(base: string | null | undefined, override: string | null | undefined): string | undefined {
	if (override !== undefined) {
		if (override === null || override === "") return undefined;
		return override;
	}
	return base ?? undefined;
}

export function buildPromptVariables(options: {
	planFilePath: string;
	phase: RuntimePhase;
	totalCount: number;
	completedCount: number;
	remainingCount?: number;
	todoList?: string;
}): PromptVariables {
	const totalCount = options.totalCount;
	const completedCount = options.completedCount;
	const remainingCount = options.remainingCount ?? Math.max(totalCount - completedCount, 0);

	return {
		planFilePath: options.planFilePath,
		todoList: options.todoList ?? "",
		completedCount,
		totalCount,
		remainingCount,
		phase: options.phase,
	};
}

export function renderTemplate(template: string, vars: PromptVariables): PromptRenderResult {
	const unknownVariables = new Set<string>();
	const text = template.replace(/\$\{([a-zA-Z0-9_]+)\}/g, (_match, key: string) => {
		if (key in vars) {
			const value = vars[key as keyof PromptVariables];
			return value === undefined || value === null ? "" : String(value);
		}
		unknownVariables.add(key);
		return "";
	});

	return { text, unknownVariables: [...unknownVariables] };
}

export function formatTodoList(items: Array<{ step: number; text: string; completed: boolean }>): {
	todoList: string;
	completedCount: number;
	totalCount: number;
	remainingCount: number;
} {
	const totalCount = items.length;
	const completedCount = items.filter((item) => item.completed).length;
	const remainingItems = items.filter((item) => !item.completed);
	const todoList = remainingItems.length
		? remainingItems.map((item) => `- [ ] ${item.step}. ${item.text}`).join("\n")
		: "";

	return {
		todoList,
		completedCount,
		totalCount,
		remainingCount: remainingItems.length,
	};
}
extensions/plannotator/current-pi-session.ts (new file, 147 lines)
@@ -0,0 +1,147 @@
import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent";

type SendUserMessageContent = Parameters<ExtensionAPI["sendUserMessage"]>[0];
type SendUserMessageOptions = Parameters<ExtensionAPI["sendUserMessage"]>[1];
type NotificationType = "info" | "warning" | "error";

type CurrentPiSession = {
	token: symbol;
	sendUserMessage: (content: SendUserMessageContent, options?: SendUserMessageOptions) => void;
	notify?: (message: string, type?: NotificationType) => void;
	identity?: PiSessionIdentity;
};

type CurrentPiSessionStore = {
	current?: CurrentPiSession;
};

type PlannotatorGlobal = typeof globalThis & {
	__plannotatorCurrentPiSession?: CurrentPiSessionStore;
};

export type CurrentPiSessionRegistration = {
	token: symbol;
	update: (ctx: ExtensionContext) => void;
	clear: () => void;
};

export type PiSessionIdentity = {
	sessionId?: string;
	sessionFile?: string;
	sessionName?: string;
	cwd?: string;
};

const globalStore = globalThis as PlannotatorGlobal;

function getStore(): CurrentPiSessionStore {
	globalStore.__plannotatorCurrentPiSession ??= {};
	return globalStore.__plannotatorCurrentPiSession;
}

function getErrorMessage(err: unknown): string {
	return err instanceof Error ? err.message : String(err);
}

export function getPiSessionIdentity(ctx: ExtensionContext): PiSessionIdentity {
	return {
		sessionId: ctx.sessionManager.getSessionId(),
		sessionFile: ctx.sessionManager.getSessionFile(),
		sessionName: ctx.sessionManager.getSessionName(),
		cwd: ctx.cwd,
	};
}

function isDifferentSession(origin: PiSessionIdentity, current: PiSessionIdentity | undefined): boolean {
	if (!current) return false;
	if (origin.sessionId && current.sessionId) return origin.sessionId !== current.sessionId;
	if (origin.sessionFile && current.sessionFile) return origin.sessionFile !== current.sessionFile;
	return false;
}

function setCurrentPiSession(token: symbol, pi: ExtensionAPI, ctx?: ExtensionContext): void {
	const current: CurrentPiSession = {
		token,
		sendUserMessage: (content, options) => {
			pi.sendUserMessage(content, options);
		},
	};
	if (ctx) {
		current.notify = (message, type = "info") => {
			ctx.ui.notify(message, type);
		};
		current.identity = getPiSessionIdentity(ctx);
	}
	getStore().current = current;
}

export function registerCurrentPiSession(pi: ExtensionAPI): CurrentPiSessionRegistration {
	const token = Symbol("plannotator-current-pi-session");
	setCurrentPiSession(token, pi);
	return {
		token,
		update: (ctx) => {
			setCurrentPiSession(token, pi, ctx);
		},
		clear: () => {
			const store = getStore();
			if (store.current?.token === token) {
				store.current = undefined;
			}
		},
	};
}

export function notifyCurrentPiSession(
	message: string,
	type: NotificationType = "info",
	origin?: PiSessionIdentity,
): boolean {
	const current = getStore().current;
	if (!current?.notify) return false;
	if (origin && !isDifferentSession(origin, current.identity)) return false;
	try {
		current.notify(message, type);
		return true;
	} catch (err) {
		console.error(`Plannotator current-session notification failed: ${getErrorMessage(err)}`);
		return false;
	}
}

export function isCurrentPiSessionDifferentFrom(origin: PiSessionIdentity): boolean {
	return isDifferentSession(origin, getStore().current?.identity);
}

function getCurrentPiSessionLabel(): string {
	const identity = getStore().current?.identity;
	if (!identity) return "unknown";
	return identity.sessionName || identity.sessionFile || identity.sessionId || "current active Pi session";
}

export function withCurrentPiSessionFallbackHeader(content: SendUserMessageContent): SendUserMessageContent {
	if (typeof content !== "string") return content;
	return `This Plannotator feedback was submitted from a browser tab opened before Pi switched sessions. It is being delivered to ${getCurrentPiSessionLabel()} because the original Pi session is no longer active.

${content}`;
}

export function sendUserMessageToCurrentPiSession(
	content: SendUserMessageContent,
	options?: SendUserMessageOptions,
	origin?: PiSessionIdentity,
): { ok: true } | { ok: false; reason: "no-current" | "same-session" | "send-failed"; error: unknown } {
	const current = getStore().current;
	if (!current) {
		return { ok: false, reason: "no-current", error: new Error("No active Pi session is available.") };
	}
	if (origin && !isDifferentSession(origin, current.identity)) {
		return { ok: false, reason: "same-session", error: new Error("No different active Pi session is available.") };
	}
	try {
		current.sendUserMessage(content, options);
		return { ok: true };
	} catch (err) {
		return { ok: false, reason: "send-failed", error: err };
	}
}
extensions/plannotator/generated/agent-jobs.ts (new file, 133 lines)
@@ -0,0 +1,133 @@
// @generated — DO NOT EDIT. Source: packages/shared/agent-jobs.ts
/**
 * Agent Jobs — shared types, state machine, and SSE helpers.
 *
 * Runtime-agnostic: no node:fs, no node:http, no Bun APIs.
 * Both the Bun server handler and (future) Node handler import
 * this module and wrap it with their respective HTTP transport layers.
 *
 * Mirrors packages/shared/external-annotation.ts in structure.
 */

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

export type AgentJobStatus = "starting" | "running" | "done" | "failed" | "killed";

/**
 * Snapshot of the diff the reviewer was looking at when this job was launched.
 * Carried on the job so downstream UIs (agent-result panel "Copy All") export
 * the same `**Diff:** ...` header the job was actually run against — if the
 * reviewer switches the UI to a different diff afterwards, the job's snapshot
 * still reflects truth. Structurally compatible with the UI-side
 * `FeedbackDiffContext` in `packages/review-editor/utils/exportFeedback.ts`.
 */
export interface AgentJobDiffContext {
	mode: string;
	base?: string;
	worktreePath?: string | null;
}

export interface AgentJobInfo {
	/** Unique job identifier (UUID). */
	id: string;
	/** Source identifier for external annotations — "agent-{id prefix}". */
	source: string;
	/** Provider that spawned this job — "claude", "codex", "tour", "shell", etc. */
	provider: string;
	/** Underlying engine used (e.g., "claude" or "codex"). Set when provider is "tour". */
	engine?: string;
	/** Model used (e.g., "sonnet", "opus"). Set when provider is "tour" with Claude engine. */
	model?: string;
	/** Claude --effort level (e.g., "low", "medium", "high", "xhigh", "max"). */
	effort?: string;
	/** Codex reasoning effort level (e.g., "high", "medium"). */
	reasoningEffort?: string;
	/** Whether Codex fast mode (service_tier=fast) was enabled. */
	fastMode?: boolean;
	/** Human-readable label for the job. */
	label: string;
	/** Current lifecycle status. */
	status: AgentJobStatus;
	/** Timestamp when the job was created. */
	startedAt: number;
	/** Timestamp when the job reached a terminal state. */
	endedAt?: number;
	/** Process exit code (set on done/failed). */
	exitCode?: number;
	/** Last ~500 chars of stderr on failure. */
	error?: string;
	/** The actual command that was spawned (for display/debug). */
	command: string[];
	/** Working directory where the process was spawned. */
	cwd?: string;
	/** The review prompt text (system + user message). Stored separately from command for providers that use stdin. */
	prompt?: string;
	/** Review summary set by the agent on completion. */
	summary?: {
		correctness: string;
		explanation: string;
		confidence: number;
	};
	/** PR URL at launch time — used to attribute findings to the correct PR. */
	prUrl?: string;
	/** PR diff scope at launch time — "layer" or "full-stack". */
	diffScope?: string;
	/** Diff context at launch time (see AgentJobDiffContext). */
	diffContext?: AgentJobDiffContext;
}

export interface AgentCapability {
	id: string;
	name: string;
	available: boolean;
}

export interface AgentCapabilities {
	mode: "plan" | "review" | "annotate";
	providers: AgentCapability[];
	/** True if at least one provider is available. */
	available: boolean;
}

// ---------------------------------------------------------------------------
// SSE event types
// ---------------------------------------------------------------------------

export type AgentJobEvent =
	| { type: "snapshot"; jobs: AgentJobInfo[] }
	| { type: "job:started"; job: AgentJobInfo }
	| { type: "job:updated"; job: AgentJobInfo }
	| { type: "job:completed"; job: AgentJobInfo }
	| { type: "job:log"; jobId: string; delta: string }
	| { type: "jobs:cleared" };

// ---------------------------------------------------------------------------
// SSE helpers
// ---------------------------------------------------------------------------

/** Heartbeat comment to keep SSE connections alive (sent every 30s). */
export const AGENT_HEARTBEAT_COMMENT = ":\n\n";

/** Interval in ms between heartbeat comments. */
export const AGENT_HEARTBEAT_INTERVAL_MS = 30_000;

/** Encode an event as an SSE `data:` line. */
export function serializeAgentSSEEvent(event: AgentJobEvent): string {
	return `data: ${JSON.stringify(event)}\n\n`;
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Check if a status is terminal (no further transitions). */
export function isTerminalStatus(status: AgentJobStatus): boolean {
	return status === "done" || status === "failed" || status === "killed";
}

/** Generate the source identifier for a job from its ID. */
export function jobSource(id: string): string {
	return "agent-" + id.slice(0, 8);
}
extensions/plannotator/generated/ai/base-session.ts (new file, 95 lines)
@@ -0,0 +1,95 @@
// @generated — DO NOT EDIT. Source: packages/ai/base-session.ts
/**
 * Shared session base class — extracts the common lifecycle, abort, and
 * ID-resolution logic that every AIProvider session needs.
 *
 * Concrete providers extend this and implement query().
 */

import type { AIMessage, AISession } from "./types.ts";

export abstract class BaseSession implements AISession {
	readonly parentSessionId: string | null;
	onIdResolved?: (oldId: string, newId: string) => void;

	protected _placeholderId: string;
	protected _resolvedId: string | null = null;
	protected _isActive = false;
	protected _currentAbort: AbortController | null = null;
	protected _queryGen = 0;
	protected _firstQuerySent = false;

	constructor(opts: { parentSessionId: string | null; initialId?: string }) {
		this.parentSessionId = opts.parentSessionId;
		this._placeholderId = opts.initialId ?? crypto.randomUUID();
	}

	get id(): string {
		return this._resolvedId ?? this._placeholderId;
	}

	get isActive(): boolean {
		return this._isActive;
	}

	// ---------------------------------------------------------------------------
	// Query lifecycle helpers — call from concrete query() implementations
	// ---------------------------------------------------------------------------

	/** Error message returned when a query is already active. */
	static readonly BUSY_ERROR: AIMessage = {
		type: "error",
		error:
			"A query is already in progress. Abort the current query before sending a new one.",
		code: "session_busy",
	};

	/**
	 * Call at the start of query(). Returns the generation number and abort
	 * signal, or null if the session is busy.
	 */
	protected startQuery(): { gen: number; signal: AbortSignal } | null {
		if (this._isActive) return null;

		const gen = ++this._queryGen;
		this._isActive = true;
		this._currentAbort = new AbortController();
		return { gen, signal: this._currentAbort.signal };
	}

	/**
	 * Call in the finally block of query(). Only clears state if the
	 * generation matches (prevents a stale finally from clobbering a newer query).
	 */
	protected endQuery(gen: number): void {
		if (this._queryGen === gen) {
			this._isActive = false;
			this._currentAbort = null;
		}
	}

	/**
	 * Call when the provider resolves the real session ID from the backend.
	 * Fires the onIdResolved callback so the SessionManager can remap its key.
	 */
	protected resolveId(newId: string): void {
		if (this._resolvedId) return; // Already resolved
		const oldId = this._placeholderId;
		this._resolvedId = newId;
		this.onIdResolved?.(oldId, newId);
	}

	/**
	 * Abort the current in-flight query. Subclasses should call super.abort()
	 * after any provider-specific cleanup.
	 */
	abort(): void {
		if (this._currentAbort) {
			this._currentAbort.abort();
			this._isActive = false;
			this._currentAbort = null;
		}
	}

	abstract query(prompt: string): AsyncIterable<AIMessage>;
}
extensions/plannotator/generated/ai/context.ts (new file, 212 lines)
@@ -0,0 +1,212 @@
// @generated — DO NOT EDIT. Source: packages/ai/context.ts
/**
 * Context builders — translate Plannotator review state into system prompts
 * that give the AI session the right background for answering questions.
 *
 * These are provider-agnostic: any AIProvider implementation can use them
 * to build the system prompt it needs.
 */

import type { AIContext } from "./types.ts";

// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------

/**
 * Build a system prompt from the given context.
 *
 * The prompt tells the AI:
 * - What role it plays (plan reviewer, code reviewer, etc.)
 * - The content it should reference (plan markdown, diff patch, file)
 * - Any annotations the user has already made
 * - That it's operating inside Plannotator (not a general coding session)
 */
export function buildSystemPrompt(ctx: AIContext): string {
	switch (ctx.mode) {
		case "plan-review":
			return buildPlanReviewPrompt(ctx);
		case "code-review":
			return buildCodeReviewPrompt(ctx);
		case "annotate":
			return buildAnnotatePrompt(ctx);
	}
}

/**
 * Build a compact context summary suitable for injecting into a fork prompt.
 *
 * When forking from a parent session, we don't need a full system prompt
 * (the parent's history already provides context). Instead, we inject a
 * short "you are now in Plannotator" preamble with the relevant content.
 */
export function buildForkPreamble(ctx: AIContext): string {
	const lines: string[] = [
		"The user is now reviewing your work in Plannotator and has a question.",
		"Answer concisely based on the conversation history and the context below.",
		"",
	];

	switch (ctx.mode) {
		case "plan-review": {
			lines.push("## Current Plan Under Review");
			lines.push("");
			lines.push(truncate(ctx.plan.plan, MAX_PLAN_CHARS));
			if (ctx.plan.annotations) {
				lines.push("");
				lines.push("## User Annotations So Far");
				lines.push(ctx.plan.annotations);
			}
			break;
		}
		case "code-review": {
			if (ctx.review.filePath) {
				lines.push(`## Reviewing: ${ctx.review.filePath}`);
			}
			if (ctx.review.selectedCode) {
				lines.push("");
				lines.push("### Selected Code");
				lines.push("```");
				lines.push(ctx.review.selectedCode);
				lines.push("```");
			}
			if (ctx.review.lineRange) {
				const { start, end, side } = ctx.review.lineRange;
				lines.push(`Lines ${start}-${end} (${side} side)`);
			}
			lines.push("");
			lines.push("## Diff Patch");
			lines.push("```diff");
			lines.push(truncate(ctx.review.patch, MAX_DIFF_CHARS));
			lines.push("```");
			if (ctx.review.annotations) {
				lines.push("");
				lines.push("## User Annotations So Far");
				lines.push(ctx.review.annotations);
			}
			break;
		}
		case "annotate": {
			lines.push(`## Annotating: ${ctx.annotate.filePath}`);
			lines.push("");
			lines.push(truncate(ctx.annotate.content, MAX_PLAN_CHARS));
			if (ctx.annotate.annotations) {
				lines.push("");
				lines.push("## User Annotations So Far");
				lines.push(ctx.annotate.annotations);
			}
			break;
		}
	}

	return lines.join("\n");
}

/**
 * Build the effective prompt for a query, prepending a preamble on the first
 * message. Used by providers that inject context via the prompt itself (Codex,
 * Pi) rather than a separate system-prompt channel (Claude).
 */
export function buildEffectivePrompt(
	userPrompt: string,
	preamble: string | null,
	firstQuerySent: boolean,
): string {
	if (!firstQuerySent && preamble) {
		return `${preamble}\n\n---\n\nUser question: ${userPrompt}`;
	}
	return userPrompt;
}

// ---------------------------------------------------------------------------
// Internals
// ---------------------------------------------------------------------------

const MAX_PLAN_CHARS = 60_000;
const MAX_DIFF_CHARS = 40_000;

function truncate(text: string, max: number): string {
	if (text.length <= max) return text;
	return `${text.slice(0, max)}\n\n... [truncated for context window]`;
}

function buildPlanReviewPrompt(
	ctx: Extract<AIContext, { mode: "plan-review" }>
): string {
	const sections: string[] = [
		"The user is reviewing an implementation plan in Plannotator.",
		"",
		"## Plan Under Review",
		"",
		truncate(ctx.plan.plan, MAX_PLAN_CHARS),
	];

	if (ctx.plan.previousPlan) {
		sections.push("");
		sections.push("## Previous Plan Version (for reference)");
		sections.push(truncate(ctx.plan.previousPlan, MAX_PLAN_CHARS / 2));
	}

	if (ctx.plan.annotations) {
		sections.push("");
		sections.push("## User Annotations");
		sections.push(ctx.plan.annotations);
	}

	return sections.join("\n");
}

function buildCodeReviewPrompt(
	ctx: Extract<AIContext, { mode: "code-review" }>
): string {
	const sections: string[] = [
		"The user is reviewing a code diff in Plannotator.",
	];

	if (ctx.review.filePath) {
		sections.push("");
		sections.push(`## Currently Viewing: ${ctx.review.filePath}`);
	}

	if (ctx.review.selectedCode) {
		sections.push("");
		sections.push("## Selected Code");
		sections.push("```");
		sections.push(ctx.review.selectedCode);
		sections.push("```");
	}

	sections.push("");
	sections.push("## Diff");
	sections.push("```diff");
	sections.push(truncate(ctx.review.patch, MAX_DIFF_CHARS));
	sections.push("```");

	if (ctx.review.annotations) {
		sections.push("");
		sections.push("## User Annotations");
		sections.push(ctx.review.annotations);
	}

	return sections.join("\n");
}

function buildAnnotatePrompt(
	ctx: Extract<AIContext, { mode: "annotate" }>
): string {
	const sections: string[] = [
		"The user is annotating a markdown document in Plannotator.",
		"",
		`## Document: ${ctx.annotate.filePath}`,
		"",
		truncate(ctx.annotate.content, MAX_PLAN_CHARS),
	];

	if (ctx.annotate.annotations) {
		sections.push("");
		sections.push("## User Annotations");
		sections.push(ctx.annotate.annotations);
	}

	return sections.join("\n");
}
extensions/plannotator/generated/ai/endpoints.ts (new file, 309 lines)
@@ -0,0 +1,309 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/endpoints.ts
|
||||
/**
|
||||
* HTTP endpoint handlers for AI features.
|
||||
*
|
||||
* These handlers are provider-agnostic — they work with whatever AIProvider
|
||||
* is registered in the provided ProviderRegistry. They're designed to be
|
||||
* mounted into any Plannotator server (plan review, code review, annotate).
|
||||
*
|
||||
* Endpoints:
|
||||
* POST /api/ai/session — Create or fork an AI session
|
||||
* POST /api/ai/query — Send a message and stream the response
|
||||
* POST /api/ai/abort — Abort the current query
|
||||
* GET /api/ai/sessions — List active sessions
|
||||
* GET /api/ai/capabilities — Check if AI features are available
|
||||
*/
|
||||
|
||||
import type { AIContext, AIMessage, CreateSessionOptions } from "./types.ts";
|
||||
import type { ProviderRegistry } from "./provider.ts";
|
||||
import type { SessionManager } from "./session-manager.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Types for request/response
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface CreateSessionRequest {
|
||||
/** The context mode and content for the session. */
|
||||
context: AIContext;
|
||||
/** Instance ID of the provider to use (optional — uses default if omitted). */
|
||||
providerId?: string;
|
||||
/** Optional model override. */
|
||||
model?: string;
|
||||
/** Max agentic turns. */
|
||||
maxTurns?: number;
|
||||
/** Max budget in USD. */
|
||||
maxBudgetUsd?: number;
|
||||
/** Reasoning effort (Codex only). */
|
||||
reasoningEffort?: "minimal" | "low" | "medium" | "high" | "xhigh";
|
||||
}
|
||||
|
||||
export interface QueryRequest {
|
||||
/** The session ID to query. */
|
||||
sessionId: string;
|
||||
/** The user's prompt/question. */
|
||||
prompt: string;
|
||||
/** Optional context update (e.g., new annotations since session was created). */
|
||||
contextUpdate?: string;
|
||||
}
|
||||
|
||||
export interface AbortRequest {
|
||||
/** The session ID to abort. */
|
||||
sessionId: string;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Handler factory
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface AIEndpointDeps {
|
||||
/** Provider registry (one per server or shared). */
|
||||
registry: ProviderRegistry;
|
||||
/** Session manager instance (one per server). */
|
||||
sessionManager: SessionManager;
|
||||
/** Resolve the current working directory for new AI sessions. */
|
||||
getCwd?: () => string;
|
||||
}
|
||||
|
||||
/**
 * Create the route handler map for AI endpoints.
 *
 * Usage in a Bun server:
 * ```ts
 * const aiHandlers = createAIEndpoints({ registry, sessionManager });
 *
 * // In your request handler:
 * if (url.pathname.startsWith('/api/ai/')) {
 *   const handler = aiHandlers[url.pathname];
 *   if (handler) return handler(req);
 * }
 * ```
 */
|
||||
export function createAIEndpoints(deps: AIEndpointDeps) {
|
||||
const { registry, sessionManager, getCwd } = deps;
|
||||
|
||||
return {
|
||||
"/api/ai/capabilities": async (_req: Request) => {
|
||||
const defaultEntry = registry.getDefault();
|
||||
const providerDetails = registry.list().map(id => {
|
||||
const p = registry.get(id)!;
|
||||
return {
|
||||
id,
|
||||
name: p.name,
|
||||
capabilities: p.capabilities,
|
||||
models: p.models ?? [],
|
||||
};
|
||||
});
|
||||
return Response.json({
|
||||
available: !!defaultEntry,
|
||||
providers: providerDetails,
|
||||
defaultProvider: defaultEntry?.id ?? null,
|
||||
});
|
||||
},
|
||||
|
||||
"/api/ai/session": async (req: Request) => {
|
||||
if (req.method !== "POST") {
|
||||
return new Response("Method not allowed", { status: 405 });
|
||||
}
|
||||
|
||||
const body = (await req.json()) as CreateSessionRequest;
|
||||
const { context, providerId, model, maxTurns, maxBudgetUsd, reasoningEffort } = body;
|
||||
|
||||
if (!context?.mode) {
|
||||
return Response.json(
|
||||
{ error: "Missing context.mode" },
|
||||
{ status: 400 }
|
||||
);
|
||||
}
|
||||
|
||||
// Resolve provider: by ID, or default
|
||||
const provider = providerId
|
||||
? registry.get(providerId)
|
||||
: registry.getDefault()?.provider;
|
||||
|
||||
if (!provider) {
|
||||
return Response.json(
|
||||
{ error: providerId ? `Provider "${providerId}" not found` : "No AI provider available" },
|
||||
{ status: 503 }
|
||||
);
|
||||
}
|
||||
|
||||
try {
|
||||
const options: CreateSessionOptions = {
|
||||
context,
|
||||
cwd: getCwd?.(),
|
||||
model,
|
||||
maxTurns,
|
||||
maxBudgetUsd,
|
||||
reasoningEffort,
|
||||
};
|
||||
|
||||
// Fork if parent session is provided AND provider supports it.
|
||||
// Providers that can't fork (e.g. Codex) fall back to a fresh
|
||||
// session with the full system prompt — no fake history.
|
||||
const shouldFork = context.parent && provider.capabilities.fork;
|
||||
const session = shouldFork
|
||||
? await provider.forkSession(options)
|
||||
: await provider.createSession(options);
|
||||
|
||||
const entry = sessionManager.track(session, context.mode);
|
||||
|
||||
return Response.json({
|
||||
sessionId: session.id,
|
||||
parentSessionId: session.parentSessionId,
|
||||
mode: context.mode,
|
||||
createdAt: entry.createdAt,
|
||||
});
|
||||
} catch (err) {
|
||||
return Response.json(
|
||||
{
|
||||
error:
|
||||
err instanceof Error ? err.message : "Failed to create session",
|
||||
},
|
||||
{ status: 500 }
|
||||
);
|
||||
}
|
||||
},
|
||||
|
||||
"/api/ai/query": async (req: Request) => {
|
||||
if (req.method !== "POST") {
|
||||
return new Response("Method not allowed", { status: 405 });
|
||||
}
|
||||
|
||||
const body = (await req.json()) as QueryRequest;
|
||||
const { sessionId, prompt, contextUpdate } = body;
|
||||
|
||||
if (!sessionId || !prompt) {
|
||||
return Response.json(
|
||||
{ error: "Missing sessionId or prompt" },
|
||||
{ status: 400 }
|
||||
);
|
||||
}
|
||||
|
||||
const entry = sessionManager.get(sessionId);
|
||||
if (!entry) {
|
||||
return Response.json(
|
||||
{ error: "Session not found" },
|
||||
{ status: 404 }
|
||||
);
|
||||
}
|
||||
|
||||
sessionManager.touch(sessionId);
|
||||
|
||||
// If context update provided, prepend it to the prompt
|
||||
const effectivePrompt = contextUpdate
|
||||
? `[Context update: the user has made changes since this conversation started]\n${contextUpdate}\n\n${prompt}`
|
||||
: prompt;
|
||||
|
||||
// Set label from first query if not already set
|
||||
if (!entry.label) {
|
||||
entry.label = prompt.slice(0, 80);
|
||||
}
|
||||
|
||||
// Stream the response using Server-Sent Events (SSE)
|
||||
const encoder = new TextEncoder();
|
||||
const stream = new ReadableStream({
|
||||
async start(controller) {
|
||||
try {
|
||||
for await (const message of entry.session.query(effectivePrompt)) {
|
||||
const data = JSON.stringify(message);
|
||||
controller.enqueue(
|
||||
encoder.encode(`data: ${data}\n\n`)
|
||||
);
|
||||
}
|
||||
controller.enqueue(encoder.encode("data: [DONE]\n\n"));
|
||||
} catch (err) {
|
||||
const errorMsg: AIMessage = {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "stream_error",
|
||||
};
|
||||
controller.enqueue(
|
||||
encoder.encode(`data: ${JSON.stringify(errorMsg)}\n\n`)
|
||||
);
|
||||
} finally {
|
||||
controller.close();
|
||||
}
|
||||
},
|
||||
});
|
||||
|
||||
return new Response(stream, {
|
||||
headers: {
|
||||
"Content-Type": "text/event-stream",
|
||||
"Cache-Control": "no-cache",
|
||||
Connection: "keep-alive",
|
||||
},
|
||||
});
|
||||
},
|
||||
|
||||
"/api/ai/abort": async (req: Request) => {
|
||||
if (req.method !== "POST") {
|
||||
return new Response("Method not allowed", { status: 405 });
|
||||
}
|
||||
|
||||
const body = (await req.json()) as AbortRequest;
|
||||
const entry = sessionManager.get(body.sessionId);
|
||||
if (!entry) {
|
||||
return Response.json(
|
||||
{ error: "Session not found" },
|
||||
{ status: 404 }
|
||||
);
|
||||
}
|
||||
|
||||
entry.session.abort();
|
||||
return Response.json({ ok: true });
|
||||
},
|
||||
|
||||
"/api/ai/permission": async (req: Request) => {
|
||||
if (req.method !== "POST") {
|
||||
return new Response("Method not allowed", { status: 405 });
|
||||
}
|
||||
|
||||
const body = (await req.json()) as {
|
||||
sessionId: string;
|
||||
requestId: string;
|
||||
allow: boolean;
|
||||
message?: string;
|
||||
};
|
||||
|
||||
if (!body.sessionId || !body.requestId) {
|
||||
return Response.json(
|
||||
{ error: "Missing sessionId or requestId" },
|
||||
{ status: 400 }
|
||||
);
|
||||
}
|
||||
|
||||
const entry = sessionManager.get(body.sessionId);
|
||||
if (!entry) {
|
||||
return Response.json(
|
||||
{ error: "Session not found" },
|
||||
{ status: 404 }
|
||||
);
|
||||
}
|
||||
|
||||
entry.session.respondToPermission?.(
|
||||
body.requestId,
|
||||
body.allow,
|
||||
body.message
|
||||
);
|
||||
|
||||
return Response.json({ ok: true });
|
||||
},
|
||||
|
||||
"/api/ai/sessions": async (_req: Request) => {
|
||||
const entries = sessionManager.list();
|
||||
return Response.json(
|
||||
entries.map((e) => ({
|
||||
sessionId: e.session.id,
|
||||
mode: e.mode,
|
||||
parentSessionId: e.parentSessionId,
|
||||
createdAt: e.createdAt,
|
||||
lastActiveAt: e.lastActiveAt,
|
||||
isActive: e.session.isActive,
|
||||
label: e.label,
|
||||
}))
|
||||
);
|
||||
},
|
||||
} as const;
|
||||
}
|
||||
|
||||
export type AIEndpoints = ReturnType<typeof createAIEndpoints>;
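
The handler map above is made of plain request-to-response functions, so mounting it and consuming the query stream is mostly glue. A hedged sketch, assuming a local Bun server on a placeholder port and using only the SSE format defined by the /api/ai/query handler:

```ts
import { ProviderRegistry, SessionManager, createAIEndpoints } from "@plannotator/ai";

const registry = new ProviderRegistry();
const sessionManager = new SessionManager();
const aiHandlers = createAIEndpoints({ registry, sessionManager });

Bun.serve({
  port: 3000, // placeholder port
  fetch(req) {
    const url = new URL(req.url);
    const handler = (aiHandlers as Record<string, (r: Request) => Promise<Response>>)[url.pathname];
    if (handler) return handler(req);
    return new Response("Not found", { status: 404 });
  },
});

// Reading the SSE stream returned by /api/ai/query: each event is a JSON-encoded
// AIMessage on a "data:" line, and the stream ends with "data: [DONE]".
// (A robust client would buffer partial lines across chunk boundaries.)
async function readQueryStream(sessionId: string, prompt: string) {
  const res = await fetch("http://localhost:3000/api/ai/query", {
    method: "POST",
    body: JSON.stringify({ sessionId, prompt }),
  });
  const decoder = new TextDecoder();
  for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
    for (const line of decoder.decode(chunk, { stream: true }).split("\n")) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice(6);
      if (payload === "[DONE]") return;
      console.log(JSON.parse(payload)); // an AIMessage
    }
  }
}
```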
106
extensions/plannotator/generated/ai/index.ts
Normal file
@@ -0,0 +1,106 @@
// @generated — DO NOT EDIT. Source: packages/ai/index.ts
/**
 * @plannotator/ai — AI provider layer for Plannotator.
 *
 * This package provides the backbone for AI-powered features (inline chat,
 * plan Q&A, code review assistance) across all Plannotator surfaces.
 *
 * Architecture:
 *
 *   ┌─────────────────┐     ┌──────────────┐
 *   │ Plan Review UI  │────▶│              │
 *   ├─────────────────┤     │ AI Endpoints │──▶ SSE stream
 *   │ Code Review UI  │────▶│    (HTTP)    │
 *   ├─────────────────┤     │              │
 *   │ Annotate UI     │────▶└──────┬───────┘
 *   └─────────────────┘            │
 *                                  ▼
 *                        ┌─────────────────┐
 *                        │ Session Manager │
 *                        └────────┬────────┘
 *                                 │
 *                        ┌────────▼───────┐
 *                        │   AIProvider   │  (abstract)
 *                        └────────┬───────┘
 *                                 │
 *                   ┌─────────────┼──────────────┐
 *                   ▼             ▼              ▼
 *          ┌──────────────┐ ┌──────────┐  ┌───────────┐
 *          │ Claude Agent │ │ OpenCode │  │  Future   │
 *          │ SDK Provider │ │ Provider │  │ Providers │
 *          └──────────────┘ └──────────┘  └───────────┘
 *
 * Quick start:
 *
 * ```ts
 * import "@plannotator/ai/providers/claude-agent-sdk";
 * import { ProviderRegistry, createProvider, createAIEndpoints, SessionManager } from "@plannotator/ai";
 *
 * // 1. Create a registry and provider
 * const registry = new ProviderRegistry();
 * const provider = await createProvider({ type: "claude-agent-sdk", cwd: process.cwd() });
 * registry.register(provider);
 *
 * // 2. Create endpoints and session manager
 * const sessionManager = new SessionManager();
 * const aiEndpoints = createAIEndpoints({ registry, sessionManager });
 *
 * // 3. Mount endpoints in your Bun server
 * // aiEndpoints["/api/ai/query"](request) → SSE Response
 * ```
 */

// Types
|
||||
export type {
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
AIProviderConfig,
|
||||
AISession,
|
||||
AIMessage,
|
||||
AITextMessage,
|
||||
AITextDeltaMessage,
|
||||
AIToolUseMessage,
|
||||
AIToolResultMessage,
|
||||
AIErrorMessage,
|
||||
AIResultMessage,
|
||||
AIPermissionRequestMessage,
|
||||
AIUnknownMessage,
|
||||
AIContext,
|
||||
AIContextMode,
|
||||
PlanContext,
|
||||
CodeReviewContext,
|
||||
AnnotateContext,
|
||||
ParentSession,
|
||||
CreateSessionOptions,
|
||||
ClaudeAgentSDKConfig,
|
||||
CodexSDKConfig,
|
||||
PiSDKConfig,
|
||||
OpenCodeConfig,
|
||||
} from "./types.ts";
|
||||
|
||||
// Provider registry
|
||||
export {
|
||||
ProviderRegistry,
|
||||
registerProviderFactory,
|
||||
createProvider,
|
||||
} from "./provider.ts";
|
||||
|
||||
// Context builders
|
||||
export { buildSystemPrompt, buildForkPreamble, buildEffectivePrompt } from "./context.ts";
|
||||
|
||||
// Base session
|
||||
export { BaseSession } from "./base-session.ts";
|
||||
|
||||
// Session manager
|
||||
export { SessionManager } from "./session-manager.ts";
|
||||
export type { SessionEntry, SessionManagerOptions } from "./session-manager.ts";
|
||||
|
||||
// HTTP endpoints
|
||||
export { createAIEndpoints } from "./endpoints.ts";
|
||||
export type {
|
||||
AIEndpoints,
|
||||
AIEndpointDeps,
|
||||
CreateSessionRequest,
|
||||
QueryRequest,
|
||||
AbortRequest,
|
||||
} from "./endpoints.ts";
104
extensions/plannotator/generated/ai/provider.ts
Normal file
@@ -0,0 +1,104 @@
// @generated — DO NOT EDIT. Source: packages/ai/provider.ts
/**
 * Provider registry — manages AI provider instances.
 *
 * Supports multiple instances of the same provider type (e.g., two Claude
 * Agent SDK providers with different configs) keyed by instance ID.
 *
 * Each server (plan review, code review, annotate) should create its own
 * ProviderRegistry or share one — no module-level global state.
 */

import type { AIProvider, AIProviderConfig } from "./types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registry (global — factories are stateless type→constructor maps)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
type ProviderFactory = (config: AIProviderConfig) => Promise<AIProvider>;
|
||||
const factories = new Map<string, ProviderFactory>();
|
||||
|
||||
/** Register a factory function for a provider type. */
|
||||
export function registerProviderFactory(
|
||||
type: string,
|
||||
factory: ProviderFactory
|
||||
): void {
|
||||
factories.set(type, factory);
|
||||
}
|
||||
|
||||
/** Create a provider from config using a registered factory. Does NOT auto-register. */
|
||||
export async function createProvider(
|
||||
config: AIProviderConfig
|
||||
): Promise<AIProvider> {
|
||||
const factory = factories.get(config.type);
|
||||
if (!factory) {
|
||||
throw new Error(
|
||||
`No AI provider factory registered for type "${config.type}". ` +
|
||||
`Available: ${[...factories.keys()].join(", ") || "(none)"}`
|
||||
);
|
||||
}
|
||||
return factory(config);
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Registry
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class ProviderRegistry {
|
||||
private instances = new Map<string, AIProvider>();
|
||||
|
||||
/**
|
||||
* Register a provider instance under an ID.
|
||||
* If no instanceId is provided, uses `provider.name`.
|
||||
* Returns the instanceId used.
|
||||
*/
|
||||
register(provider: AIProvider, instanceId?: string): string {
|
||||
const id = instanceId ?? provider.name;
|
||||
this.instances.set(id, provider);
|
||||
return id;
|
||||
}
|
||||
|
||||
/** Get a provider by instance ID. */
|
||||
get(instanceId: string): AIProvider | undefined {
|
||||
return this.instances.get(instanceId);
|
||||
}
|
||||
|
||||
/** Get the first registered provider (convenience for single-provider setups). */
|
||||
getDefault(): { id: string; provider: AIProvider } | undefined {
|
||||
const first = this.instances.entries().next();
|
||||
if (first.done) return undefined;
|
||||
return { id: first.value[0], provider: first.value[1] };
|
||||
}
|
||||
|
||||
/** Get all instances of a given provider type (by provider.name). */
|
||||
getByType(typeName: string): AIProvider[] {
|
||||
return [...this.instances.values()].filter((p) => p.name === typeName);
|
||||
}
|
||||
|
||||
/** List all instance IDs. */
|
||||
list(): string[] {
|
||||
return [...this.instances.keys()];
|
||||
}
|
||||
|
||||
/** Dispose and remove a single instance. No-op if not found. */
|
||||
dispose(instanceId: string): void {
|
||||
const provider = this.instances.get(instanceId);
|
||||
if (provider) {
|
||||
provider.dispose();
|
||||
this.instances.delete(instanceId);
|
||||
}
|
||||
}
|
||||
|
||||
/** Dispose all providers and clear the registry. */
|
||||
disposeAll(): void {
|
||||
for (const provider of this.instances.values()) {
|
||||
provider.dispose();
|
||||
}
|
||||
this.instances.clear();
|
||||
}
|
||||
|
||||
/** Number of registered instances. */
|
||||
get size(): number {
|
||||
return this.instances.size;
|
||||
}
|
||||
}
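
A short sketch of the registry with two instances of the same provider type; instance IDs, paths, and the permissionMode value are illustrative, with config fields taken from the Claude provider later in this diff:

```ts
import "@plannotator/ai/providers/claude-agent-sdk"; // registers the factory as a side effect
import { ProviderRegistry, createProvider } from "@plannotator/ai";

const registry = new ProviderRegistry();

// Two instances of the same provider type, keyed by explicit instance IDs.
const readOnly = await createProvider({ type: "claude-agent-sdk", cwd: "/repos/app" });
const planMode = await createProvider({
  type: "claude-agent-sdk",
  cwd: "/repos/app",
  permissionMode: "plan",
});

registry.register(readOnly, "claude-readonly");
registry.register(planMode, "claude-plan");

console.log(registry.list());           // ["claude-readonly", "claude-plan"]
console.log(registry.getDefault()?.id); // "claude-readonly" — first registered instance

registry.disposeAll(); // calls dispose() on every instance and clears the map
```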
445
extensions/plannotator/generated/ai/providers/claude-agent-sdk.ts
Normal file
@@ -0,0 +1,445 @@
// @generated — DO NOT EDIT. Source: packages/ai/providers/claude-agent-sdk.ts
/**
 * Claude Agent SDK provider — the first concrete AIProvider implementation.
 *
 * Uses @anthropic-ai/claude-agent-sdk to create sessions that can:
 *   - Start fresh with Plannotator context as the system prompt
 *   - Fork from a parent Claude Code session (preserving full history)
 *   - Resume a previous Plannotator inline chat session
 *   - Stream text deltas back to the UI in real time
 *
 * Sessions are read-only by default (tools limited to Read, Glob, Grep,
 * WebSearch) to keep inline chat safe and cost-bounded.
 */

import { buildSystemPrompt, buildForkPreamble, buildEffectivePrompt } from "../context.ts";
|
||||
import { BaseSession } from "../base-session.ts";
|
||||
import type {
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
AISession,
|
||||
AIMessage,
|
||||
CreateSessionOptions,
|
||||
ClaudeAgentSDKConfig,
|
||||
} from "../types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Constants
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const PROVIDER_NAME = "claude-agent-sdk";
|
||||
|
||||
/** Default read-only tools for inline chat. */
|
||||
const DEFAULT_ALLOWED_TOOLS = ["Read", "Glob", "Grep", "WebSearch"];
|
||||
|
||||
const DEFAULT_MAX_TURNS = 99;
|
||||
const DEFAULT_MODEL = "claude-sonnet-4-6";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// SDK query options — typed to catch typos at compile time
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface ClaudeSDKQueryOptions {
|
||||
model: string;
|
||||
maxTurns: number;
|
||||
allowedTools: string[];
|
||||
cwd: string;
|
||||
abortController: AbortController;
|
||||
includePartialMessages: boolean;
|
||||
persistSession: boolean;
|
||||
maxBudgetUsd?: number;
|
||||
systemPrompt?: string | { type: "preset"; preset: string; append?: string };
|
||||
resume?: string;
|
||||
forkSession?: boolean;
|
||||
permissionMode?: ClaudeAgentSDKConfig['permissionMode'];
|
||||
allowDangerouslySkipPermissions?: boolean;
|
||||
pathToClaudeCodeExecutable?: string;
|
||||
settingSources?: string[];
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class ClaudeAgentSDKProvider implements AIProvider {
|
||||
readonly name = PROVIDER_NAME;
|
||||
readonly capabilities: AIProviderCapabilities = {
|
||||
fork: true,
|
||||
resume: true,
|
||||
streaming: true,
|
||||
tools: true,
|
||||
};
|
||||
readonly models = [
|
||||
{ id: 'claude-sonnet-4-6', label: 'Sonnet 4.6', default: true },
|
||||
{ id: 'claude-sonnet-4-6[1m]', label: 'Sonnet 4.6 (1M)' },
|
||||
{ id: 'claude-opus-4-7', label: 'Opus 4.7' },
|
||||
{ id: 'claude-opus-4-7[1m]', label: 'Opus 4.7 (1M)' },
|
||||
{ id: 'claude-opus-4-6', label: 'Opus 4.6' },
|
||||
{ id: 'claude-opus-4-6[1m]', label: 'Opus 4.6 (1M)' },
|
||||
{ id: 'claude-haiku-4-5', label: 'Haiku 4.5' },
|
||||
] as const;
|
||||
|
||||
private config: ClaudeAgentSDKConfig;
|
||||
|
||||
constructor(config: ClaudeAgentSDKConfig) {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async createSession(options: CreateSessionOptions): Promise<AISession> {
|
||||
return new ClaudeAgentSDKSession({
|
||||
...this.baseConfig(options),
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
cwd: options.cwd ?? this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
forkFromSession: null,
|
||||
});
|
||||
}
|
||||
|
||||
async forkSession(options: CreateSessionOptions): Promise<AISession> {
|
||||
const parent = options.context.parent;
|
||||
if (!parent) {
|
||||
throw new Error(
|
||||
"Cannot fork: no parent session provided in context. " +
|
||||
"Use createSession() for standalone sessions."
|
||||
);
|
||||
}
|
||||
|
||||
return new ClaudeAgentSDKSession({
|
||||
...this.baseConfig(options),
|
||||
systemPrompt: null,
|
||||
forkPreamble: buildForkPreamble(options.context),
|
||||
cwd: parent.cwd,
|
||||
parentSessionId: parent.sessionId,
|
||||
forkFromSession: parent.sessionId,
|
||||
});
|
||||
}
|
||||
|
||||
async resumeSession(sessionId: string): Promise<AISession> {
|
||||
return new ClaudeAgentSDKSession({
|
||||
...this.baseConfig(),
|
||||
systemPrompt: null,
|
||||
cwd: this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
forkFromSession: null,
|
||||
resumeSessionId: sessionId,
|
||||
});
|
||||
}
|
||||
|
||||
dispose(): void {
|
||||
// No persistent resources to clean up
|
||||
}
|
||||
|
||||
private baseConfig(options?: CreateSessionOptions) {
|
||||
return {
|
||||
model: options?.model ?? this.config.model ?? DEFAULT_MODEL,
|
||||
maxTurns: options?.maxTurns ?? DEFAULT_MAX_TURNS,
|
||||
maxBudgetUsd: options?.maxBudgetUsd,
|
||||
allowedTools: this.config.allowedTools ?? DEFAULT_ALLOWED_TOOLS,
|
||||
permissionMode: this.config.permissionMode ?? "default",
|
||||
claudeExecutablePath: this.config.claudeExecutablePath,
|
||||
settingSources: this.config.settingSources ?? ['user', 'project'],
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// SDK import cache — resolve once, reuse across all queries
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types resolved at runtime via dynamic import
|
||||
let sdkQueryFn: ((...args: any[]) => any) | null = null;
|
||||
|
||||
async function getSDKQuery() {
|
||||
if (!sdkQueryFn) {
|
||||
const sdk = await import("@anthropic-ai/claude-agent-sdk");
|
||||
sdkQueryFn = sdk.query;
|
||||
}
|
||||
return sdkQueryFn!;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface SessionConfig {
|
||||
systemPrompt: string | null;
|
||||
forkPreamble?: string;
|
||||
model: string;
|
||||
maxTurns: number;
|
||||
maxBudgetUsd?: number;
|
||||
allowedTools: string[];
|
||||
permissionMode: ClaudeAgentSDKConfig['permissionMode'];
|
||||
cwd: string;
|
||||
parentSessionId: string | null;
|
||||
forkFromSession: string | null;
|
||||
resumeSessionId?: string;
|
||||
claudeExecutablePath?: string;
|
||||
settingSources?: string[];
|
||||
}
|
||||
|
||||
class ClaudeAgentSDKSession extends BaseSession {
|
||||
private config: SessionConfig;
|
||||
/** Active Query object — needed to send control responses (permission decisions) */
|
||||
private _activeQuery: { streamInput: (iter: AsyncIterable<unknown>) => Promise<void> } | null = null;
|
||||
|
||||
constructor(config: SessionConfig) {
|
||||
super({
|
||||
parentSessionId: config.parentSessionId,
|
||||
initialId: config.resumeSessionId,
|
||||
});
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async *query(prompt: string): AsyncIterable<AIMessage> {
|
||||
const started = this.startQuery();
|
||||
if (!started) { yield BaseSession.BUSY_ERROR; return; }
|
||||
const { gen } = started;
|
||||
|
||||
try {
|
||||
const queryFn = await getSDKQuery();
|
||||
|
||||
const queryPrompt = buildEffectivePrompt(
|
||||
prompt,
|
||||
this.config.forkPreamble ?? null,
|
||||
this._firstQuerySent,
|
||||
);
|
||||
const options = this.buildQueryOptions();
|
||||
|
||||
const stream = queryFn({ prompt: queryPrompt, options }) as
|
||||
AsyncIterable<Record<string, unknown>> & { streamInput: (iter: AsyncIterable<unknown>) => Promise<void> };
|
||||
this._activeQuery = stream;
|
||||
|
||||
this._firstQuerySent = true;
|
||||
|
||||
for await (const message of stream) {
|
||||
const mapped = mapSDKMessage(message);
|
||||
|
||||
// Capture the real session ID from the init message
|
||||
if (
|
||||
!this._resolvedId &&
|
||||
"session_id" in message &&
|
||||
typeof message.session_id === "string" &&
|
||||
message.session_id
|
||||
) {
|
||||
this.resolveId(message.session_id);
|
||||
}
|
||||
|
||||
for (const msg of mapped) {
|
||||
yield msg;
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "provider_error",
|
||||
};
|
||||
} finally {
|
||||
this.endQuery(gen);
|
||||
this._activeQuery = null;
|
||||
}
|
||||
}
|
||||
|
||||
abort(): void {
|
||||
this._activeQuery = null;
|
||||
super.abort();
|
||||
}
|
||||
|
||||
respondToPermission(requestId: string, allow: boolean, message?: string): void {
|
||||
if (!this._activeQuery || !this._activeQuery.streamInput) return;
|
||||
|
||||
const response = allow
|
||||
? { type: 'control_response', response: { subtype: 'success', request_id: requestId, response: { behavior: 'allow' } } }
|
||||
: { type: 'control_response', response: { subtype: 'success', request_id: requestId, response: { behavior: 'deny', message: message ?? 'User denied this action' } } };
|
||||
|
||||
this._activeQuery.streamInput(
|
||||
(async function* () { yield response; })()
|
||||
).catch(() => {});
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Internal
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
private buildQueryOptions(): ClaudeSDKQueryOptions {
|
||||
const opts: ClaudeSDKQueryOptions = {
|
||||
model: this.config.model,
|
||||
maxTurns: this.config.maxTurns,
|
||||
allowedTools: this.config.allowedTools,
|
||||
cwd: this.config.cwd,
|
||||
abortController: this._currentAbort!,
|
||||
includePartialMessages: true,
|
||||
persistSession: true,
|
||||
...(this.config.claudeExecutablePath && {
|
||||
pathToClaudeCodeExecutable: this.config.claudeExecutablePath,
|
||||
}),
|
||||
...(this.config.settingSources && {
|
||||
settingSources: this.config.settingSources,
|
||||
}),
|
||||
};
|
||||
|
||||
if (this.config.maxBudgetUsd) {
|
||||
opts.maxBudgetUsd = this.config.maxBudgetUsd;
|
||||
}
|
||||
|
||||
// After the first query resolves a real session ID, all subsequent
|
||||
// queries must resume that session to continue the conversation.
|
||||
if (this._resolvedId) {
|
||||
opts.resume = this._resolvedId;
|
||||
return this.applyPermissionMode(opts);
|
||||
}
|
||||
|
||||
// First query: use Claude Code's built-in prompt with our context appended
|
||||
if (this.config.systemPrompt) {
|
||||
opts.systemPrompt = {
|
||||
type: "preset",
|
||||
preset: "claude_code",
|
||||
append: this.config.systemPrompt,
|
||||
};
|
||||
}
|
||||
|
||||
if (this.config.forkFromSession) {
|
||||
opts.resume = this.config.forkFromSession;
|
||||
opts.forkSession = true;
|
||||
}
|
||||
|
||||
if (this.config.resumeSessionId) {
|
||||
opts.resume = this.config.resumeSessionId;
|
||||
}
|
||||
|
||||
return this.applyPermissionMode(opts);
|
||||
}
|
||||
|
||||
private applyPermissionMode(opts: ClaudeSDKQueryOptions): ClaudeSDKQueryOptions {
|
||||
if (this.config.permissionMode === "bypassPermissions") {
|
||||
opts.permissionMode = "bypassPermissions";
|
||||
opts.allowDangerouslySkipPermissions = true;
|
||||
} else if (this.config.permissionMode === "plan") {
|
||||
opts.permissionMode = "plan";
|
||||
}
|
||||
return opts;
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Message mapping
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
 * Map an SDK message to one or more AIMessages.
 *
 * An SDK assistant message can contain both text and tool_use content blocks
 * in a single response. We emit each block as a separate AIMessage so no
 * content is dropped.
 */
|
||||
function mapSDKMessage(msg: Record<string, unknown>): AIMessage[] {
|
||||
const type = msg.type as string;
|
||||
|
||||
switch (type) {
|
||||
case "assistant": {
|
||||
const message = msg.message as Record<string, unknown> | undefined;
|
||||
if (!message) return [{ type: "unknown", raw: msg }];
|
||||
const content = message.content as Array<Record<string, unknown>>;
|
||||
if (!content) return [{ type: "unknown", raw: msg }];
|
||||
|
||||
const messages: AIMessage[] = [];
|
||||
const textParts: string[] = [];
|
||||
|
||||
for (const block of content) {
|
||||
if (block.type === "text" && typeof block.text === "string") {
|
||||
textParts.push(block.text);
|
||||
} else if (block.type === "tool_use") {
|
||||
// Flush accumulated text before the tool_use block
|
||||
if (textParts.length > 0) {
|
||||
messages.push({ type: "text", text: textParts.join("") });
|
||||
textParts.length = 0;
|
||||
}
|
||||
messages.push({
|
||||
type: "tool_use",
|
||||
toolName: block.name as string,
|
||||
toolInput: block.input as Record<string, unknown>,
|
||||
toolUseId: block.id as string,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Flush any remaining text after the last block
|
||||
if (textParts.length > 0) {
|
||||
messages.push({ type: "text", text: textParts.join("") });
|
||||
}
|
||||
|
||||
return messages.length > 0 ? messages : [{ type: "unknown", raw: msg }];
|
||||
}
|
||||
|
||||
case "stream_event": {
|
||||
const event = msg.event as Record<string, unknown> | undefined;
|
||||
if (!event) return [{ type: "unknown", raw: msg }];
|
||||
const eventType = event.type as string;
|
||||
|
||||
if (eventType === "content_block_delta") {
|
||||
const delta = event.delta as Record<string, unknown>;
|
||||
if (delta?.type === "text_delta" && typeof delta.text === "string") {
|
||||
return [{ type: "text_delta", delta: delta.text }];
|
||||
}
|
||||
}
|
||||
return [{ type: "unknown", raw: msg }];
|
||||
}
|
||||
|
||||
case "user": {
|
||||
// SDK wraps tool results in SDKUserMessage (type: "user")
|
||||
if (msg.tool_use_result != null) {
|
||||
return [{
|
||||
type: "tool_result",
|
||||
result: typeof msg.tool_use_result === "string"
|
||||
? msg.tool_use_result
|
||||
: JSON.stringify(msg.tool_use_result),
|
||||
}];
|
||||
}
|
||||
return [{ type: "unknown", raw: msg }];
|
||||
}
|
||||
|
||||
case "control_request": {
|
||||
const request = msg.request as Record<string, unknown> | undefined;
|
||||
if (request?.subtype === "can_use_tool") {
|
||||
return [{
|
||||
type: "permission_request",
|
||||
requestId: msg.request_id as string,
|
||||
toolName: request.tool_name as string,
|
||||
toolInput: (request.input as Record<string, unknown>) ?? {},
|
||||
title: request.title as string | undefined,
|
||||
displayName: request.display_name as string | undefined,
|
||||
description: request.description as string | undefined,
|
||||
toolUseId: request.tool_use_id as string,
|
||||
}];
|
||||
}
|
||||
return [{ type: "unknown", raw: msg }];
|
||||
}
|
||||
|
||||
case "result": {
|
||||
const sessionId = (msg.session_id as string) ?? "";
|
||||
const subtype = msg.subtype as string;
|
||||
return [{
|
||||
type: "result",
|
||||
sessionId,
|
||||
success: subtype === "success",
|
||||
result: (msg.result as string) ?? undefined,
|
||||
costUsd: msg.total_cost_usd as number | undefined,
|
||||
turns: msg.num_turns as number | undefined,
|
||||
}];
|
||||
}
|
||||
|
||||
default:
|
||||
return [{ type: "unknown", raw: msg }];
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
import { registerProviderFactory } from "../provider.ts";
|
||||
|
||||
registerProviderFactory(
|
||||
PROVIDER_NAME,
|
||||
async (config) => new ClaudeAgentSDKProvider(config as ClaudeAgentSDKConfig)
|
||||
);
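
A sketch of driving the provider end to end: create a read-only session over an annotate-mode context and stream the reply. The context shape follows the prompt builders earlier in this diff, and the file path and prompt are placeholders:

```ts
import "@plannotator/ai/providers/claude-agent-sdk";
import { createProvider } from "@plannotator/ai";
import type { AIContext } from "@plannotator/ai";

const provider = await createProvider({ type: "claude-agent-sdk", cwd: process.cwd() });

const context = {
  mode: "annotate",
  annotate: {
    filePath: "plans/auth.md", // placeholder document
    content: "- [ ] Add validation to the login form",
  },
} as AIContext;

const session = await provider.createSession({ context, maxTurns: 5 });

for await (const msg of session.query("What is still missing from this plan?")) {
  if (msg.type === "text_delta") process.stdout.write(msg.delta);
  if (msg.type === "error") console.error(msg.error);
  if (msg.type === "result") console.log(`\n[done] cost: $${msg.costUsd ?? 0}`);
}
```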
431
extensions/plannotator/generated/ai/providers/codex-sdk.ts
Normal file
@@ -0,0 +1,431 @@
// @generated — DO NOT EDIT. Source: packages/ai/providers/codex-sdk.ts
/**
 * Codex SDK provider — bridges Plannotator's AI layer with OpenAI's Codex agent.
 *
 * Uses @openai/codex-sdk to create sessions that can:
 *   - Start fresh with Plannotator context as the system prompt
 *   - Fake-fork from a parent session (fresh thread + preamble, no real history)
 *   - Resume a previous thread by ID
 *   - Stream text deltas back to the UI in real time
 *
 * Sessions default to read-only sandbox mode for safety in inline chat.
 */

import { buildSystemPrompt, buildEffectivePrompt } from "../context.ts";
|
||||
import { BaseSession } from "../base-session.ts";
|
||||
import type {
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
AISession,
|
||||
AIMessage,
|
||||
CreateSessionOptions,
|
||||
CodexSDKConfig,
|
||||
} from "../types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Constants
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const PROVIDER_NAME = "codex-sdk";
|
||||
const DEFAULT_MODEL = "gpt-5.4";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class CodexSDKProvider implements AIProvider {
|
||||
readonly name = PROVIDER_NAME;
|
||||
readonly capabilities: AIProviderCapabilities = {
|
||||
fork: false, // No real fork — faked with fresh thread + preamble
|
||||
resume: true,
|
||||
streaming: true,
|
||||
tools: true,
|
||||
};
|
||||
readonly models = [
|
||||
{ id: 'gpt-5.5', label: 'GPT-5.5' },
|
||||
{ id: 'gpt-5.4', label: 'GPT-5.4', default: true },
|
||||
{ id: 'gpt-5.4-mini', label: 'GPT-5.4 Mini' },
|
||||
{ id: 'gpt-5.3-codex', label: 'GPT-5.3 Codex' },
|
||||
{ id: 'gpt-5.3-codex-spark', label: 'GPT-5.3 Codex Spark' },
|
||||
{ id: 'gpt-5.2-codex', label: 'GPT-5.2 Codex' },
|
||||
{ id: 'gpt-5.2', label: 'GPT-5.2' },
|
||||
] as const;
|
||||
|
||||
private config: CodexSDKConfig;
|
||||
|
||||
constructor(config: CodexSDKConfig) {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async createSession(options: CreateSessionOptions): Promise<AISession> {
|
||||
return new CodexSDKSession({
|
||||
...this.baseConfig(options),
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
cwd: options.cwd ?? this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
});
|
||||
}
|
||||
|
||||
async forkSession(_options: CreateSessionOptions): Promise<AISession> {
|
||||
throw new Error(
|
||||
"Codex does not support session forking. " +
|
||||
"The endpoint layer should fall back to createSession()."
|
||||
);
|
||||
}
|
||||
|
||||
async resumeSession(sessionId: string): Promise<AISession> {
|
||||
return new CodexSDKSession({
|
||||
...this.baseConfig(),
|
||||
systemPrompt: null,
|
||||
cwd: this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
resumeThreadId: sessionId,
|
||||
});
|
||||
}
|
||||
|
||||
dispose(): void {
|
||||
// No persistent resources to clean up
|
||||
}
|
||||
|
||||
private baseConfig(options?: CreateSessionOptions) {
|
||||
return {
|
||||
model: options?.model ?? this.config.model ?? DEFAULT_MODEL,
|
||||
maxTurns: options?.maxTurns ?? 99,
|
||||
sandboxMode: this.config.sandboxMode ?? "read-only" as const,
|
||||
codexExecutablePath: this.config.codexExecutablePath,
|
||||
reasoningEffort: options?.reasoningEffort,
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// SDK import cache — resolve once, reuse across all sessions
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK type not available at compile time
|
||||
let CodexClass: any = null;
|
||||
|
||||
async function getCodexClass() {
|
||||
if (!CodexClass) {
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK exports vary between versions
|
||||
const mod = await import("@openai/codex-sdk") as any;
|
||||
CodexClass = mod.default ?? mod.Codex;
|
||||
}
|
||||
return CodexClass;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface SessionConfig {
|
||||
systemPrompt: string | null;
|
||||
model: string;
|
||||
maxTurns: number;
|
||||
sandboxMode: "read-only" | "workspace-write" | "danger-full-access";
|
||||
cwd: string;
|
||||
parentSessionId: string | null;
|
||||
resumeThreadId?: string;
|
||||
codexExecutablePath?: string;
|
||||
reasoningEffort?: "minimal" | "low" | "medium" | "high" | "xhigh";
|
||||
}
|
||||
|
||||
class CodexSDKSession extends BaseSession {
|
||||
private config: SessionConfig;
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types not available at compile time
|
||||
private _codexInstance: any = null;
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types not available at compile time
|
||||
private _thread: any = null;
|
||||
/** Tracks cumulative text length per item for delta extraction. */
|
||||
private _itemTextOffsets = new Map<string, number>();
|
||||
|
||||
constructor(config: SessionConfig) {
|
||||
super({
|
||||
parentSessionId: config.parentSessionId,
|
||||
initialId: config.resumeThreadId,
|
||||
});
|
||||
this.config = config;
|
||||
// If resuming, treat the thread ID as already resolved
|
||||
if (config.resumeThreadId) {
|
||||
this._resolvedId = config.resumeThreadId;
|
||||
}
|
||||
}
|
||||
|
||||
async *query(prompt: string): AsyncIterable<AIMessage> {
|
||||
const started = this.startQuery();
|
||||
if (!started) { yield BaseSession.BUSY_ERROR; return; }
|
||||
const { gen, signal } = started;
|
||||
|
||||
this._itemTextOffsets.clear();
|
||||
|
||||
try {
|
||||
const Codex = await getCodexClass();
|
||||
|
||||
// Lazy-create the Codex instance
|
||||
if (!this._codexInstance) {
|
||||
this._codexInstance = new Codex({
|
||||
...(this.config.codexExecutablePath && { codexPathOverride: this.config.codexExecutablePath }),
|
||||
});
|
||||
}
|
||||
|
||||
// Lazy-create or resume the thread
|
||||
if (!this._thread) {
|
||||
if (this.config.resumeThreadId) {
|
||||
this._thread = this._codexInstance.resumeThread(this.config.resumeThreadId, {
|
||||
model: this.config.model,
|
||||
workingDirectory: this.config.cwd,
|
||||
sandboxMode: this.config.sandboxMode,
|
||||
...(this.config.reasoningEffort && { modelReasoningEffort: this.config.reasoningEffort }),
|
||||
});
|
||||
} else {
|
||||
this._thread = this._codexInstance.startThread({
|
||||
model: this.config.model,
|
||||
workingDirectory: this.config.cwd,
|
||||
sandboxMode: this.config.sandboxMode,
|
||||
...(this.config.reasoningEffort && { modelReasoningEffort: this.config.reasoningEffort }),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const effectivePrompt = buildEffectivePrompt(
|
||||
prompt,
|
||||
this.config.systemPrompt,
|
||||
this._firstQuerySent,
|
||||
);
|
||||
const streamed = await this._thread.runStreamed(effectivePrompt, {
|
||||
signal,
|
||||
});
|
||||
|
||||
this._firstQuerySent = true;
|
||||
let turnFailed = false;
|
||||
|
||||
for await (const event of streamed.events) {
|
||||
// ID resolution from thread.started
|
||||
if (
|
||||
!this._resolvedId &&
|
||||
event.type === "thread.started" &&
|
||||
typeof event.thread_id === "string"
|
||||
) {
|
||||
this.resolveId(event.thread_id);
|
||||
}
|
||||
|
||||
if (event.type === "turn.failed") {
|
||||
turnFailed = true;
|
||||
}
|
||||
|
||||
const mapped = mapCodexEvent(event, this._itemTextOffsets);
|
||||
for (const msg of mapped) {
|
||||
yield msg;
|
||||
}
|
||||
}
|
||||
|
||||
// Emit synthetic result after stream ends
|
||||
if (!turnFailed) {
|
||||
yield {
|
||||
type: "result",
|
||||
sessionId: this.id,
|
||||
success: true,
|
||||
};
|
||||
}
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "provider_error",
|
||||
};
|
||||
} finally {
|
||||
this.endQuery(gen);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Event mapping
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
 * Map a Codex SDK ThreadEvent to one or more AIMessages.
 *
 * The itemTextOffsets map tracks cumulative text length per item ID
 * so we can extract true deltas from the cumulative text in item.updated events.
 */
|
||||
function mapCodexEvent(
|
||||
event: Record<string, unknown>,
|
||||
itemTextOffsets: Map<string, number>,
|
||||
): AIMessage[] {
|
||||
const eventType = event.type as string;
|
||||
|
||||
switch (eventType) {
|
||||
case "thread.started":
|
||||
case "turn.started":
|
||||
return [];
|
||||
|
||||
case "turn.completed":
|
||||
return [];
|
||||
|
||||
case "turn.failed": {
|
||||
const error = event.error as Record<string, unknown> | undefined;
|
||||
return [{
|
||||
type: "error",
|
||||
error: (error?.message as string) ?? "Turn failed",
|
||||
code: "turn_failed",
|
||||
}];
|
||||
}
|
||||
|
||||
case "error":
|
||||
return [{
|
||||
type: "error",
|
||||
error: (event.message as string) ?? "Unknown error",
|
||||
code: "codex_error",
|
||||
}];
|
||||
|
||||
case "item.started":
|
||||
case "item.updated":
|
||||
case "item.completed":
|
||||
return mapCodexItem(event, itemTextOffsets);
|
||||
|
||||
default:
|
||||
return [{ type: "unknown", raw: event }];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Map item-level events to AIMessages.
|
||||
*/
|
||||
function mapCodexItem(
|
||||
event: Record<string, unknown>,
|
||||
itemTextOffsets: Map<string, number>,
|
||||
): AIMessage[] {
|
||||
const item = event.item as Record<string, unknown>;
|
||||
if (!item) return [{ type: "unknown", raw: event }];
|
||||
|
||||
const eventType = event.type as string;
|
||||
const itemType = item.type as string;
|
||||
const itemId = (item.id as string) ?? "";
|
||||
const isStarted = eventType === "item.started";
|
||||
const isCompleted = eventType === "item.completed";
|
||||
|
||||
switch (itemType) {
|
||||
case "agent_message": {
|
||||
const text = (item.text as string) ?? "";
|
||||
|
||||
if (isStarted) {
|
||||
// Reset offset tracking for this item
|
||||
itemTextOffsets.set(itemId, 0);
|
||||
return [];
|
||||
}
|
||||
|
||||
if (isCompleted) {
|
||||
// Emit final complete text
|
||||
itemTextOffsets.delete(itemId);
|
||||
return text ? [{ type: "text", text }] : [];
|
||||
}
|
||||
|
||||
// item.updated — extract delta from cumulative text
|
||||
const prevOffset = itemTextOffsets.get(itemId) ?? 0;
|
||||
if (text.length > prevOffset) {
|
||||
const delta = text.slice(prevOffset);
|
||||
itemTextOffsets.set(itemId, text.length);
|
||||
return [{ type: "text_delta", delta }];
|
||||
}
|
||||
return [];
|
||||
}
|
||||
|
||||
case "command_execution": {
|
||||
const messages: AIMessage[] = [];
|
||||
if (isStarted) {
|
||||
messages.push({
|
||||
type: "tool_use",
|
||||
toolName: "Bash",
|
||||
toolInput: { command: item.command as string },
|
||||
toolUseId: itemId,
|
||||
});
|
||||
}
|
||||
if (isCompleted) {
|
||||
const output = (item.aggregated_output as string) ?? "";
|
||||
const exitCode = item.exit_code as number | undefined;
|
||||
messages.push({
|
||||
type: "tool_result",
|
||||
toolUseId: itemId,
|
||||
result: exitCode != null ? `${output}\n[exit code: ${exitCode}]` : output,
|
||||
});
|
||||
}
|
||||
return messages;
|
||||
}
|
||||
|
||||
case "file_change": {
|
||||
const changes = item.changes as Array<{ path: string; kind: string }> | undefined;
|
||||
if (isStarted || isCompleted) {
|
||||
return [{
|
||||
type: "tool_use",
|
||||
toolName: "FileChange",
|
||||
toolInput: { changes: changes ?? [] },
|
||||
toolUseId: itemId,
|
||||
}];
|
||||
}
|
||||
return [];
|
||||
}
|
||||
|
||||
case "mcp_tool_call": {
|
||||
const messages: AIMessage[] = [];
|
||||
if (isStarted) {
|
||||
messages.push({
|
||||
type: "tool_use",
|
||||
toolName: `${item.server as string}/${item.tool as string}`,
|
||||
toolInput: (item.arguments as Record<string, unknown>) ?? {},
|
||||
toolUseId: itemId,
|
||||
});
|
||||
}
|
||||
if (isCompleted) {
|
||||
if (item.result != null) {
|
||||
messages.push({
|
||||
type: "tool_result",
|
||||
toolUseId: itemId,
|
||||
result: typeof item.result === "string" ? item.result : JSON.stringify(item.result),
|
||||
});
|
||||
}
|
||||
if (item.error) {
|
||||
const err = item.error as Record<string, unknown>;
|
||||
messages.push({
|
||||
type: "error",
|
||||
error: (err.message as string) ?? "MCP tool call failed",
|
||||
code: "mcp_error",
|
||||
});
|
||||
}
|
||||
}
|
||||
return messages;
|
||||
}
|
||||
|
||||
case "error":
|
||||
return [{
|
||||
type: "error",
|
||||
error: (item.message as string) ?? "Unknown error",
|
||||
}];
|
||||
|
||||
case "reasoning":
|
||||
case "web_search":
|
||||
case "todo_list":
|
||||
return [{ type: "unknown", raw: { eventType, item } }];
|
||||
|
||||
default:
|
||||
return [{ type: "unknown", raw: { eventType, item } }];
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Exported for testing
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export { mapCodexEvent, mapCodexItem };
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
import { registerProviderFactory } from "../provider.ts";
|
||||
|
||||
registerProviderFactory(
|
||||
PROVIDER_NAME,
|
||||
async (config) => new CodexSDKProvider(config as CodexSDKConfig)
|
||||
);
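
Codex reports cumulative text on item.updated rather than true deltas, which is why mapCodexItem keeps per-item offsets. A standalone sketch of that bookkeeping, with hypothetical item IDs:

```ts
// Standalone illustration of the offset bookkeeping in mapCodexItem:
// Codex sends the full text so far on every item.updated, so the mapper
// remembers how much it has already emitted per item and slices off the rest.
const itemTextOffsets = new Map<string, number>();

function extractDelta(itemId: string, cumulativeText: string): string {
  const prev = itemTextOffsets.get(itemId) ?? 0;
  if (cumulativeText.length <= prev) return "";
  itemTextOffsets.set(itemId, cumulativeText.length);
  return cumulativeText.slice(prev);
}

extractDelta("item_1", "Hel");           // "Hel"
extractDelta("item_1", "Hello, wo");     // "lo, wo"
extractDelta("item_1", "Hello, world");  // "rld"
// item.completed clears the entry and emits the final full text once.
itemTextOffsets.delete("item_1");
```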
547
extensions/plannotator/generated/ai/providers/opencode-sdk.ts
Normal file
@@ -0,0 +1,547 @@
// @generated — DO NOT EDIT. Source: packages/ai/providers/opencode-sdk.ts
/**
 * OpenCode provider — bridges Plannotator's AI layer with OpenCode's agent server.
 *
 * Uses @opencode-ai/sdk to connect to an existing `opencode serve` first and
 * only spawns a new server when nothing is reachable. One server is shared
 * across all sessions. The user must have the `opencode` CLI installed and
 * authenticated.
 */

import type { OpencodeClient } from "@opencode-ai/sdk";
|
||||
import { BaseSession } from "../base-session.ts";
|
||||
import { buildSystemPrompt } from "../context.ts";
|
||||
import type {
|
||||
AIMessage,
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
AISession,
|
||||
CreateSessionOptions,
|
||||
OpenCodeConfig,
|
||||
} from "../types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Constants
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const PROVIDER_NAME = "opencode-sdk";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// SDK import cache — resolve once, reuse across all sessions
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types not available at compile time
|
||||
let sdk: any = null;
|
||||
|
||||
async function getSDK() {
|
||||
if (!sdk) {
|
||||
sdk = await import("@opencode-ai/sdk");
|
||||
}
|
||||
return sdk;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class OpenCodeProvider implements AIProvider {
|
||||
readonly name = PROVIDER_NAME;
|
||||
readonly capabilities: AIProviderCapabilities = {
|
||||
fork: true,
|
||||
resume: true,
|
||||
streaming: true,
|
||||
tools: true,
|
||||
};
|
||||
models?: Array<{ id: string; label: string; default?: boolean }>;
|
||||
|
||||
private config: OpenCodeConfig;
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types not available at compile time
|
||||
private server: { url: string; close: () => void } | null = null;
|
||||
private client: OpencodeClient | null = null;
|
||||
private startPromise: Promise<void> | null = null;
|
||||
private lastAttachError: string | null = null;
|
||||
|
||||
constructor(config: OpenCodeConfig) {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
/** Attach to an existing OpenCode server or spawn one if needed. */
|
||||
async ensureServer(): Promise<void> {
|
||||
if (this.client) return;
|
||||
this.startPromise ??= this.doStart().catch((err) => {
|
||||
this.startPromise = null;
|
||||
throw err;
|
||||
});
|
||||
return this.startPromise;
|
||||
}
|
||||
|
||||
private async doStart(): Promise<void> {
|
||||
this.lastAttachError = null;
|
||||
const { createOpencodeServer, createOpencodeClient } = await getSDK();
|
||||
const attachedClient = await this.tryAttachExistingServer(createOpencodeClient);
|
||||
if (attachedClient) {
|
||||
this.client = attachedClient;
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
this.server = await createOpencodeServer({
|
||||
hostname: this.config.hostname ?? "127.0.0.1",
|
||||
...(this.config.port != null && { port: this.config.port }),
|
||||
timeout: 15_000,
|
||||
});
|
||||
} catch (err) {
|
||||
const spawnMessage = err instanceof Error ? err.message : String(err);
|
||||
if (this.lastAttachError) {
|
||||
throw new Error(`${this.lastAttachError}\nFallback startup also failed: ${spawnMessage}`);
|
||||
}
|
||||
throw err;
|
||||
}
|
||||
|
||||
this.client = createOpencodeClient({
|
||||
baseUrl: this.server!.url,
|
||||
directory: this.config.cwd ?? process.cwd(),
|
||||
});
|
||||
}
|
||||
|
||||
private async tryAttachExistingServer(
|
||||
createOpencodeClient: (config?: { baseUrl?: string; directory?: string }) => OpencodeClient,
|
||||
): Promise<OpencodeClient | null> {
|
||||
const cwd = this.config.cwd ?? process.cwd();
|
||||
const baseUrl = `http://${this.config.hostname ?? "127.0.0.1"}:${this.config.port ?? 4096}`;
|
||||
const client = createOpencodeClient({
|
||||
baseUrl,
|
||||
directory: cwd,
|
||||
});
|
||||
|
||||
try {
|
||||
await client.config.get({
|
||||
throwOnError: true,
|
||||
signal: AbortSignal.timeout(1_000),
|
||||
});
|
||||
return client;
|
||||
} catch (err) {
|
||||
const message = err instanceof Error ? err.message : String(err);
|
||||
this.lastAttachError = `Failed to attach to existing OpenCode server at ${baseUrl}: ${message}`;
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
private getClient(): OpencodeClient {
|
||||
if (!this.client) {
|
||||
throw new Error("OpenCode client is not initialized.");
|
||||
}
|
||||
return this.client;
|
||||
}
|
||||
|
||||
async createSession(options: CreateSessionOptions): Promise<AISession> {
|
||||
await this.ensureServer();
|
||||
const client = this.getClient();
|
||||
|
||||
const result = await client.session.create({
|
||||
query: { directory: options.cwd ?? this.config.cwd ?? process.cwd() },
|
||||
});
|
||||
const sessionData = result.data;
|
||||
if (!sessionData) {
|
||||
throw new Error("OpenCode did not return session data.");
|
||||
}
|
||||
|
||||
const session = new OpenCodeSession({
|
||||
sessionId: sessionData.id,
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
client,
|
||||
model: options.model,
|
||||
parentSessionId: null,
|
||||
});
|
||||
return session;
|
||||
}
|
||||
|
||||
async forkSession(options: CreateSessionOptions): Promise<AISession> {
|
||||
await this.ensureServer();
|
||||
const client = this.getClient();
|
||||
|
||||
const parentId = options.context.parent?.sessionId;
|
||||
if (!parentId) {
|
||||
throw new Error("Fork requires a parent session ID.");
|
||||
}
|
||||
|
||||
const result = await client.session.fork({
|
||||
path: { id: parentId },
|
||||
});
|
||||
const sessionData = result.data;
|
||||
if (!sessionData) {
|
||||
throw new Error("OpenCode did not return forked session data.");
|
||||
}
|
||||
|
||||
return new OpenCodeSession({
|
||||
sessionId: sessionData.id,
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
client,
|
||||
model: options.model,
|
||||
parentSessionId: parentId,
|
||||
});
|
||||
}
|
||||
|
||||
async resumeSession(sessionId: string): Promise<AISession> {
|
||||
await this.ensureServer();
|
||||
const client = this.getClient();
|
||||
|
||||
// Verify session exists
|
||||
await client.session.get({ path: { id: sessionId } });
|
||||
|
||||
return new OpenCodeSession({
|
||||
sessionId,
|
||||
systemPrompt: null,
|
||||
client,
|
||||
model: undefined,
|
||||
parentSessionId: null,
|
||||
});
|
||||
}
|
||||
|
||||
dispose(): void {
|
||||
if (this.server) {
|
||||
this.server.close();
|
||||
this.server = null;
|
||||
}
|
||||
this.client = null;
|
||||
this.startPromise = null;
|
||||
}
|
||||
|
||||
/** Fetch available models from OpenCode. Call before registering the provider. */
|
||||
async fetchModels(): Promise<void> {
|
||||
try {
|
||||
await this.ensureServer();
|
||||
const client = this.getClient();
|
||||
|
||||
const result = await client.provider.list({
|
||||
query: { directory: this.config.cwd ?? process.cwd() },
|
||||
});
|
||||
const data = result.data;
|
||||
if (!data) {
|
||||
return;
|
||||
}
|
||||
const connected = new Set(data.connected ?? []);
|
||||
const allProviders = data.all ?? [];
|
||||
|
||||
const models: Array<{ id: string; label: string; default?: boolean }> = [];
|
||||
for (const provider of allProviders) {
|
||||
if (!connected.has(provider.id)) continue;
|
||||
for (const model of Object.values(provider.models)) {
|
||||
models.push({
|
||||
id: `${provider.id}/${model.id}`,
|
||||
label: model.name ?? model.id,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
if (models.length > 0) {
|
||||
// Mark first model as default
|
||||
models[0].default = true;
|
||||
this.models = models;
|
||||
}
|
||||
} catch {
|
||||
// OpenCode not configured or no models available
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface SessionConfig {
|
||||
sessionId: string;
|
||||
systemPrompt: string | null;
|
||||
// biome-ignore lint/suspicious/noExplicitAny: SDK types not available at compile time
|
||||
client: any;
|
||||
/** Model in "providerID/modelID" format. */
|
||||
model?: string;
|
||||
parentSessionId: string | null;
|
||||
}
|
||||
|
||||
class OpenCodeSession extends BaseSession {
|
||||
private config: SessionConfig;
|
||||
|
||||
constructor(config: SessionConfig) {
|
||||
super({
|
||||
parentSessionId: config.parentSessionId,
|
||||
initialId: config.sessionId,
|
||||
});
|
||||
this.config = config;
|
||||
this._resolvedId = config.sessionId;
|
||||
}
|
||||
|
||||
async *query(prompt: string): AsyncIterable<AIMessage> {
|
||||
const started = this.startQuery();
|
||||
if (!started) {
|
||||
yield BaseSession.BUSY_ERROR;
|
||||
return;
|
||||
}
|
||||
const { gen } = started;
|
||||
|
||||
try {
|
||||
// Build model param if specified
|
||||
let modelParam: { providerID: string; modelID: string } | undefined;
|
||||
if (this.config.model) {
|
||||
const [providerID, ...rest] = this.config.model.split("/");
|
||||
const modelID = rest.join("/");
|
||||
if (providerID && modelID) {
|
||||
modelParam = { providerID, modelID };
|
||||
}
|
||||
}
|
||||
|
||||
// Subscribe to SSE events
|
||||
const { stream } = await this.config.client.event.subscribe();
|
||||
|
||||
try {
|
||||
// Send prompt asynchronously
|
||||
try {
|
||||
await this.config.client.session.promptAsync({
|
||||
path: { id: this.config.sessionId },
|
||||
body: {
|
||||
...(!this._firstQuerySent &&
|
||||
this.config.systemPrompt && {
|
||||
system: this.config.systemPrompt,
|
||||
}),
|
||||
...(modelParam && { model: modelParam }),
|
||||
parts: [{ type: "text", text: prompt }],
|
||||
},
|
||||
});
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: `OpenCode rejected prompt: ${err instanceof Error ? err.message : String(err)}`,
|
||||
code: "opencode_prompt_rejected",
|
||||
};
|
||||
return;
|
||||
}
|
||||
this._firstQuerySent = true;
|
||||
|
||||
// Drain SSE events filtered by session ID
|
||||
for await (const event of stream) {
|
||||
const eventType = event.type as string;
|
||||
const props = event.properties as Record<string, unknown> | undefined;
|
||||
if (!props) continue;
|
||||
|
||||
// Filter: only events for our session
|
||||
const eventSessionId =
|
||||
(props.sessionID as string) ??
|
||||
((props.info as Record<string, unknown>)?.sessionID as string) ??
|
||||
((props.part as Record<string, unknown>)?.sessionID as string);
|
||||
if (eventSessionId && eventSessionId !== this.config.sessionId) continue;
|
||||
|
||||
const mapped = mapOpenCodeEvent(eventType, props, this.id);
|
||||
for (const msg of mapped) {
|
||||
yield msg;
|
||||
if (msg.type === "result" || (msg.type === "error" && isTerminalEvent(eventType))) {
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
stream.return?.();
|
||||
}
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "provider_error",
|
||||
};
|
||||
} finally {
|
||||
this.endQuery(gen);
|
||||
}
|
||||
}
|
||||
|
||||
abort(): void {
|
||||
this.config.client.session
|
||||
.abort({ path: { id: this.config.sessionId } })
|
||||
.catch(() => {});
|
||||
super.abort();
|
||||
}
|
||||
|
||||
respondToPermission(
|
||||
requestId: string,
|
||||
allow: boolean,
|
||||
_message?: string,
|
||||
): void {
|
||||
this.config.client
|
||||
.postSessionIdPermissionsPermissionId({
|
||||
path: { id: this.config.sessionId, permissionID: requestId },
|
||||
body: { response: allow ? "once" : "reject" },
|
||||
})
|
||||
.catch(() => {});
|
||||
}
|
||||
}
|
||||
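// Illustrative sketch (not part of the generated source): the "provider/model"
// split used in query() keeps any extra slashes inside the model id, so a
// hypothetical id like "openrouter/anthropic/claude-haiku-4-5" still yields a
// single providerID.
function exampleSplitModel(model: string): { providerID: string; modelID: string } | undefined {
  const [providerID, ...rest] = model.split("/");
  const modelID = rest.join("/");
  return providerID && modelID ? { providerID, modelID } : undefined;
}
// exampleSplitModel("anthropic/claude-haiku-4-5")
//   → { providerID: "anthropic", modelID: "claude-haiku-4-5" }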
|
||||
// ---------------------------------------------------------------------------
|
||||
// Event mapping
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** Returns true for events that should terminate the query when mapped to an error. */
|
||||
function isTerminalEvent(eventType: string): boolean {
|
||||
return eventType === "session.error" || eventType === "session.status";
|
||||
}
|
||||
|
||||
/**
|
||||
* Map an OpenCode SSE event to AIMessage[].
|
||||
*
|
||||
* Key events:
|
||||
* message.part.delta → text_delta (streaming text)
|
||||
* message.part.updated → tool_use / tool_result (tool lifecycle)
|
||||
* permission.updated → permission_request
|
||||
* session.status → result (when idle)
|
||||
* message.updated → error (when message has error)
|
||||
*/
|
||||
export function mapOpenCodeEvent(
|
||||
eventType: string,
|
||||
props: Record<string, unknown>,
|
||||
sessionId: string,
|
||||
): AIMessage[] {
|
||||
switch (eventType) {
|
||||
case "message.part.delta": {
|
||||
const field = props.field as string;
|
||||
const delta = props.delta as string;
|
||||
if (field === "text" && delta) {
|
||||
return [{ type: "text_delta", delta }];
|
||||
}
|
||||
return [];
|
||||
}
|
||||
|
||||
case "message.part.updated": {
|
||||
const part = props.part as Record<string, unknown>;
|
||||
if (!part) return [];
|
||||
|
||||
const partType = part.type as string;
|
||||
|
||||
if (partType === "tool") {
|
||||
const state = part.state as Record<string, unknown>;
|
||||
if (!state) return [];
|
||||
|
||||
const status = state.status as string;
|
||||
const callID = (part.callID as string) ?? (part.id as string);
|
||||
const toolName = part.tool as string;
|
||||
|
||||
switch (status) {
|
||||
case "running":
|
||||
return [
|
||||
{
|
||||
type: "tool_use",
|
||||
toolName: toolName ?? "unknown",
|
||||
toolInput: (state.input as Record<string, unknown>) ?? {},
|
||||
toolUseId: callID,
|
||||
},
|
||||
];
|
||||
|
||||
case "completed": {
|
||||
const output = (state.output as string) ?? "";
|
||||
return [
|
||||
{
|
||||
type: "tool_result",
|
||||
toolUseId: callID,
|
||||
result: output,
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
case "error": {
|
||||
const error = (state.error as string) ?? "Tool execution failed";
|
||||
return [
|
||||
{
|
||||
type: "tool_result",
|
||||
toolUseId: callID,
|
||||
result: `[Error] ${error}`,
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
return [];
|
||||
}
|
||||
|
||||
case "permission.updated": {
|
||||
const id = props.id as string;
|
||||
const permType = props.type as string;
|
||||
const title = props.title as string;
|
||||
const callID = props.callID as string;
|
||||
const metadata = (props.metadata as Record<string, unknown>) ?? {};
|
||||
|
||||
return [
|
||||
{
|
||||
type: "permission_request",
|
||||
requestId: id,
|
||||
toolName: permType ?? "unknown",
|
||||
toolInput: metadata,
|
||||
title: title ?? permType,
|
||||
toolUseId: callID ?? id,
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
case "session.status": {
|
||||
const status = props.status as Record<string, unknown>;
|
||||
if (status?.type === "idle") {
|
||||
return [
|
||||
{
|
||||
type: "result",
|
||||
sessionId,
|
||||
success: true,
|
||||
},
|
||||
];
|
||||
}
|
||||
return [];
|
||||
}
|
||||
|
||||
case "session.error": {
|
||||
const error = props.error as Record<string, unknown>;
|
||||
const message =
|
||||
(error?.message as string) ?? (props.message as string) ?? "Session error";
|
||||
return [
|
||||
{
|
||||
type: "error",
|
||||
error: message,
|
||||
code: "opencode_session_error",
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
case "message.updated": {
|
||||
const info = props.info as Record<string, unknown>;
|
||||
if (!info) return [];
|
||||
|
||||
const msgError = info.error as Record<string, unknown>;
|
||||
if (msgError) {
|
||||
const errorData = msgError.data as Record<string, unknown>;
|
||||
const message =
|
||||
(errorData?.message as string) ??
|
||||
(msgError.name as string) ??
|
||||
"Message error";
|
||||
return [
|
||||
{
|
||||
type: "error",
|
||||
error: message,
|
||||
code: "opencode_message_error",
|
||||
},
|
||||
];
|
||||
}
|
||||
return [];
|
||||
}
|
||||
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}
|
||||
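// Illustrative sketch (not part of the generated source): a completed tool part
// from a message.part.updated event, run through mapOpenCodeEvent. The property
// shapes mirror the accesses above and are not taken from an official schema.
const exampleToolResult = mapOpenCodeEvent(
  "message.part.updated",
  {
    part: {
      type: "tool",
      callID: "call_1",
      tool: "read",
      state: { status: "completed", output: "file contents" },
    },
  },
  "ses_example",
);
// → [{ type: "tool_result", toolUseId: "call_1", result: "file contents" }]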
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
import { registerProviderFactory } from "../provider.ts";
|
||||
|
||||
registerProviderFactory(
|
||||
PROVIDER_NAME,
|
||||
async (config) => new OpenCodeProvider(config as OpenCodeConfig),
|
||||
);
|
||||
111
extensions/plannotator/generated/ai/providers/pi-events.ts
Normal file
@@ -0,0 +1,111 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/providers/pi-events.ts
|
||||
/**
|
||||
* Pi event mapping — shared between Bun and Node.js Pi providers.
|
||||
*
|
||||
* Pure function, no runtime-specific dependencies.
|
||||
*/
|
||||
|
||||
import type { AIMessage } from "../types.ts";
|
||||
|
||||
/**
|
||||
* Map a Pi AgentEvent (received as JSONL) to AIMessage[].
|
||||
*
|
||||
* Pi event hierarchy:
|
||||
* agent_start > turn_start > message_start > message_update* > message_end
|
||||
* > tool_execution_start > tool_execution_end > turn_end > agent_end
|
||||
*
|
||||
* We extract:
|
||||
* - text_delta from message_update.assistantMessageEvent
|
||||
* - tool_use from toolcall_end
|
||||
* - tool_result from tool_execution_end
|
||||
* - result from agent_end
|
||||
*/
|
||||
export function mapPiEvent(
|
||||
event: Record<string, unknown>,
|
||||
sessionId: string,
|
||||
): AIMessage[] {
|
||||
const eventType = event.type as string;
|
||||
|
||||
switch (eventType) {
|
||||
case "message_update": {
|
||||
const ame = event.assistantMessageEvent as
|
||||
| Record<string, unknown>
|
||||
| undefined;
|
||||
if (!ame) return [];
|
||||
|
||||
const subType = ame.type as string;
|
||||
|
||||
switch (subType) {
|
||||
case "text_delta":
|
||||
return [{ type: "text_delta", delta: ame.delta as string }];
|
||||
|
||||
case "toolcall_end": {
|
||||
const tc = ame.toolCall as Record<string, unknown>;
|
||||
if (!tc) return [];
|
||||
return [
|
||||
{
|
||||
type: "tool_use",
|
||||
toolName: tc.name as string,
|
||||
toolInput: (tc.arguments as Record<string, unknown>) ?? {},
|
||||
toolUseId: tc.id as string,
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
case "error": {
|
||||
const partial = ame.error as Record<string, unknown> | undefined;
|
||||
const errorMessage =
|
||||
(partial?.errorMessage as string) ?? "Stream error";
|
||||
return [
|
||||
{ type: "error", error: errorMessage, code: "pi_stream_error" },
|
||||
];
|
||||
}
|
||||
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
case "tool_execution_end": {
|
||||
const result = event.result;
|
||||
const isError = event.isError as boolean;
|
||||
const resultStr =
|
||||
result == null
|
||||
? ""
|
||||
: typeof result === "string"
|
||||
? result
|
||||
: JSON.stringify(result);
|
||||
|
||||
return [
|
||||
{
|
||||
type: "tool_result",
|
||||
toolUseId: event.toolCallId as string,
|
||||
result: isError
|
||||
? `[Error] ${resultStr || "Tool execution failed"}`
|
||||
: resultStr,
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
case "agent_end":
|
||||
return [
|
||||
{
|
||||
type: "result",
|
||||
sessionId,
|
||||
success: true,
|
||||
},
|
||||
];
|
||||
|
||||
case "process_exited":
|
||||
return [
|
||||
{
|
||||
type: "error",
|
||||
error: "Pi process exited unexpectedly.",
|
||||
code: "pi_process_exit",
|
||||
},
|
||||
];
|
||||
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}
|
||||
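// Illustrative sketch (not part of the generated source): a streaming text chunk
// and a finished tool execution, run through mapPiEvent. Field shapes follow the
// accesses above, not a documented Pi schema.
const examplePiDelta = mapPiEvent(
  { type: "message_update", assistantMessageEvent: { type: "text_delta", delta: "Hello" } },
  "ses_example",
);
// → [{ type: "text_delta", delta: "Hello" }]
const examplePiToolResult = mapPiEvent(
  { type: "tool_execution_end", toolCallId: "tc_1", result: { ok: true }, isError: false },
  "ses_example",
);
// → [{ type: "tool_result", toolUseId: "tc_1", result: '{"ok":true}' }]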
377
extensions/plannotator/generated/ai/providers/pi-sdk-node.ts
Normal file
@@ -0,0 +1,377 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/providers/pi-sdk-node.ts
|
||||
/**
|
||||
* Pi SDK provider — Node.js variant.
|
||||
*
|
||||
* Mirrors pi-sdk.ts, except the subprocess wrapper (PiProcessNode) uses child_process.spawn()
|
||||
* instead of Bun.spawn(). The provider and session classes are duplicated here as
|
||||
* PiSDKNodeProvider and PiSDKNodeSession; mapPiEvent is re-exported from the shared pi-events.ts.
|
||||
*
|
||||
* Used by the Pi extension which runs under jiti (Node.js).
|
||||
*/
|
||||
|
||||
import { spawn, type ChildProcess } from "node:child_process";
|
||||
import { BaseSession } from "../base-session.ts";
|
||||
import { buildEffectivePrompt, buildSystemPrompt } from "../context.ts";
|
||||
import type {
|
||||
AIMessage,
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
CreateSessionOptions,
|
||||
PiSDKConfig,
|
||||
} from "../types.ts";
|
||||
import { registerProviderFactory } from "../provider.ts";
|
||||
|
||||
// Re-export mapPiEvent from shared (runtime-agnostic)
|
||||
export { mapPiEvent } from "./pi-events.ts";
|
||||
|
||||
const PROVIDER_NAME = "pi-sdk";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// JSONL subprocess wrapper (Node.js)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
type EventListener = (event: Record<string, unknown>) => void;
|
||||
|
||||
class PiProcessNode {
|
||||
private proc: ChildProcess | null = null;
|
||||
private listeners: EventListener[] = [];
|
||||
private pendingRequests = new Map<
|
||||
string,
|
||||
{
|
||||
resolve: (data: Record<string, unknown>) => void;
|
||||
reject: (err: Error) => void;
|
||||
}
|
||||
>();
|
||||
private nextId = 0;
|
||||
private buffer = "";
|
||||
private _alive = false;
|
||||
|
||||
async spawn(piPath: string, cwd: string): Promise<void> {
|
||||
this.proc = spawn(piPath, ["--mode", "rpc"], {
|
||||
cwd,
|
||||
stdio: ["pipe", "pipe", "pipe"],
|
||||
});
|
||||
this._alive = true;
|
||||
|
||||
this.readStream();
|
||||
|
||||
this.proc.on("exit", () => {
|
||||
this._alive = false;
|
||||
for (const [, pending] of this.pendingRequests) {
|
||||
pending.reject(new Error("Pi process exited unexpectedly"));
|
||||
}
|
||||
this.pendingRequests.clear();
|
||||
for (const listener of this.listeners) {
|
||||
listener({ type: "process_exited" });
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
private readStream(): void {
|
||||
if (!this.proc?.stdout) return;
|
||||
|
||||
this.proc.stdout.on("data", (chunk: Buffer) => {
|
||||
this.buffer += chunk.toString();
|
||||
const lines = this.buffer.split("\n");
|
||||
this.buffer = lines.pop() ?? "";
|
||||
|
||||
for (const line of lines) {
|
||||
const trimmed = line.replace(/\r$/, "");
|
||||
if (!trimmed) continue;
|
||||
try {
|
||||
const parsed = JSON.parse(trimmed);
|
||||
this.routeMessage(parsed);
|
||||
} catch {
|
||||
// Ignore malformed lines
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
private routeMessage(msg: Record<string, unknown>): void {
|
||||
if (msg.type === "response" && typeof msg.id === "string") {
|
||||
const pending = this.pendingRequests.get(msg.id);
|
||||
if (pending) {
|
||||
this.pendingRequests.delete(msg.id);
|
||||
if (msg.success === false) {
|
||||
pending.reject(new Error((msg.error as string) ?? "RPC error"));
|
||||
} else {
|
||||
pending.resolve((msg.data as Record<string, unknown>) ?? {});
|
||||
}
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
for (const listener of this.listeners) {
|
||||
listener(msg);
|
||||
}
|
||||
}
|
||||
|
||||
send(command: Record<string, unknown>): void {
|
||||
if (!this.proc?.stdin || this.proc.stdin.destroyed) return;
|
||||
this.proc.stdin.write(`${JSON.stringify(command)}\n`);
|
||||
}
|
||||
|
||||
sendAndWait(
|
||||
command: Record<string, unknown>,
|
||||
): Promise<Record<string, unknown>> {
|
||||
const id = `req_${++this.nextId}`;
|
||||
return new Promise((resolve, reject) => {
|
||||
this.pendingRequests.set(id, { resolve, reject });
|
||||
this.send({ ...command, id });
|
||||
});
|
||||
}
|
||||
|
||||
onEvent(listener: EventListener): () => void {
|
||||
this.listeners.push(listener);
|
||||
return () => {
|
||||
const idx = this.listeners.indexOf(listener);
|
||||
if (idx >= 0) this.listeners.splice(idx, 1);
|
||||
};
|
||||
}
|
||||
|
||||
get alive(): boolean {
|
||||
return this._alive;
|
||||
}
|
||||
|
||||
kill(): void {
|
||||
this._alive = false;
|
||||
if (this.proc) {
|
||||
this.proc.kill();
|
||||
this.proc = null;
|
||||
}
|
||||
this.listeners.length = 0;
|
||||
for (const [, pending] of this.pendingRequests) {
|
||||
pending.reject(new Error("Process killed"));
|
||||
}
|
||||
this.pendingRequests.clear();
|
||||
}
|
||||
}
|
||||
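// Illustrative sketch (not part of the generated source): the JSONL frames this
// wrapper exchanges with `pi --mode rpc`. Beyond `type`, `id`, `success`,
// `error`, and `data`, the exact field names are assumptions.
//
//   → {"type":"prompt","message":"Summarize the plan","id":"req_1"}
//   ← {"type":"response","id":"req_1","success":true,"data":{}}
//   ← {"type":"message_update","assistantMessageEvent":{"type":"text_delta","delta":"..."}}
//
// Responses are matched to pending requests by `id`; every other line is fanned
// out to onEvent() listeners as an agent event.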
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider (identical to pi-sdk.ts, using PiProcessNode)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class PiSDKNodeProvider implements AIProvider {
|
||||
readonly name = PROVIDER_NAME;
|
||||
readonly capabilities: AIProviderCapabilities = {
|
||||
fork: false,
|
||||
resume: false,
|
||||
streaming: true,
|
||||
tools: true,
|
||||
};
|
||||
models?: Array<{ id: string; label: string; default?: boolean }>;
|
||||
|
||||
private config: PiSDKConfig;
|
||||
private sessions = new Map<string, PiSDKNodeSession>();
|
||||
|
||||
constructor(config: PiSDKConfig) {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async createSession(options: CreateSessionOptions): Promise<PiSDKNodeSession> {
|
||||
const session = new PiSDKNodeSession({
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
cwd: options.cwd ?? this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
piExecutablePath: this.config.piExecutablePath ?? "pi",
|
||||
model: options.model ?? this.config.model,
|
||||
});
|
||||
this.sessions.set(session.id, session);
|
||||
return session;
|
||||
}
|
||||
|
||||
async forkSession(): Promise<never> {
|
||||
throw new Error(
|
||||
"Pi does not support session forking. " +
|
||||
"The endpoint layer should fall back to createSession().",
|
||||
);
|
||||
}
|
||||
|
||||
async resumeSession(): Promise<never> {
|
||||
throw new Error("Pi does not support session resuming.");
|
||||
}
|
||||
|
||||
dispose(): void {
|
||||
for (const session of this.sessions.values()) {
|
||||
session.killProcess();
|
||||
}
|
||||
this.sessions.clear();
|
||||
}
|
||||
|
||||
async fetchModels(): Promise<void> {
|
||||
const piPath = this.config.piExecutablePath ?? "pi";
|
||||
let proc: PiProcessNode | undefined;
|
||||
try {
|
||||
proc = new PiProcessNode();
|
||||
await proc.spawn(piPath, this.config.cwd ?? process.cwd());
|
||||
const data = await Promise.race([
|
||||
proc.sendAndWait({ type: "get_available_models" }),
|
||||
new Promise<never>((_, reject) =>
|
||||
setTimeout(() => reject(new Error("Timeout")), 10_000),
|
||||
),
|
||||
]);
|
||||
const rawModels = (
|
||||
data as { models?: Array<{ provider: string; id: string; name?: string }> }
|
||||
).models;
|
||||
if (rawModels && rawModels.length > 0) {
|
||||
this.models = rawModels.map((m, i) => ({
|
||||
id: `${m.provider}/${m.id}`,
|
||||
label: m.name ?? m.id,
|
||||
...(i === 0 && { default: true }),
|
||||
}));
|
||||
}
|
||||
} catch {
|
||||
// Pi not configured or no models available
|
||||
} finally {
|
||||
proc?.kill();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session (identical to pi-sdk.ts, using PiProcessNode)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface SessionConfig {
|
||||
systemPrompt: string;
|
||||
cwd: string;
|
||||
parentSessionId: string | null;
|
||||
piExecutablePath: string;
|
||||
model?: string;
|
||||
}
|
||||
|
||||
class PiSDKNodeSession extends BaseSession {
|
||||
private config: SessionConfig;
|
||||
private process: PiProcessNode | null = null;
|
||||
|
||||
constructor(config: SessionConfig) {
|
||||
super({ parentSessionId: config.parentSessionId });
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async *query(prompt: string): AsyncIterable<AIMessage> {
|
||||
const { mapPiEvent } = await import("./pi-events.ts");
|
||||
|
||||
const started = this.startQuery();
|
||||
if (!started) {
|
||||
yield BaseSession.BUSY_ERROR;
|
||||
return;
|
||||
}
|
||||
const { gen } = started;
|
||||
|
||||
try {
|
||||
if (!this.process || !this.process.alive) {
|
||||
this.process = new PiProcessNode();
|
||||
await this.process.spawn(this.config.piExecutablePath, this.config.cwd);
|
||||
|
||||
if (this.config.model) {
|
||||
const [provider, ...rest] = this.config.model.split("/");
|
||||
const modelId = rest.join("/");
|
||||
if (provider && modelId) {
|
||||
try {
|
||||
await this.process.sendAndWait({ type: "set_model", provider, modelId });
|
||||
} catch { /* Continue with Pi's default model */ }
|
||||
}
|
||||
}
|
||||
|
||||
try {
|
||||
const state = await this.process.sendAndWait({ type: "get_state" });
|
||||
if (typeof state.sessionId === "string") {
|
||||
this.resolveId(state.sessionId);
|
||||
}
|
||||
} catch { /* Continue with placeholder ID */ }
|
||||
|
||||
if (!this.process.alive) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: "Pi process exited during startup. Check that Pi is configured correctly (API keys, models).",
|
||||
code: "pi_startup_error",
|
||||
};
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
const effectivePrompt = buildEffectivePrompt(
|
||||
prompt,
|
||||
this.config.systemPrompt,
|
||||
this._firstQuerySent,
|
||||
);
|
||||
|
||||
const queue: AIMessage[] = [];
|
||||
let resolve: (() => void) | null = null;
|
||||
let done = false;
|
||||
|
||||
const push = (msg: AIMessage) => { queue.push(msg); resolve?.(); };
|
||||
const finish = () => { done = true; resolve?.(); };
|
||||
|
||||
const unsubscribe = this.process.onEvent((event) => {
|
||||
const mapped = mapPiEvent(event, this.id);
|
||||
for (const msg of mapped) {
|
||||
push(msg);
|
||||
if (
|
||||
msg.type === "result" ||
|
||||
(msg.type === "error" && (event.type === "agent_end" || event.type === "process_exited"))
|
||||
) {
|
||||
finish();
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
try {
|
||||
await this.process.sendAndWait({ type: "prompt", message: effectivePrompt });
|
||||
} catch (err) {
|
||||
unsubscribe();
|
||||
yield {
|
||||
type: "error",
|
||||
error: `Pi rejected prompt: ${err instanceof Error ? err.message : String(err)}`,
|
||||
code: "pi_prompt_rejected",
|
||||
};
|
||||
return;
|
||||
}
|
||||
this._firstQuerySent = true;
|
||||
|
||||
try {
|
||||
while (!done || queue.length > 0) {
|
||||
if (queue.length > 0) {
|
||||
yield queue.shift()!;
|
||||
} else {
|
||||
await new Promise<void>((r) => { resolve = r; });
|
||||
resolve = null;
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
unsubscribe();
|
||||
}
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "provider_error",
|
||||
};
|
||||
} finally {
|
||||
this.endQuery(gen);
|
||||
}
|
||||
}
|
||||
|
||||
abort(): void {
|
||||
if (this.process?.alive) {
|
||||
this.process.send({ type: "abort" });
|
||||
}
|
||||
super.abort();
|
||||
}
|
||||
|
||||
killProcess(): void {
|
||||
this.process?.kill();
|
||||
this.process = null;
|
||||
}
|
||||
}
|
||||
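// Illustrative sketch (not part of the generated source): the queue/resolve pair
// inside query() is a small bridge from callback events to an async iterable.
// A generalized (hypothetical) form of the same pattern:
async function* exampleEventBridge<T>(
  subscribe: (push: (value: T) => void, finish: () => void) => () => void,
): AsyncGenerator<T> {
  const queue: T[] = [];
  let wake: (() => void) | null = null;
  let done = false;
  const unsubscribe = subscribe(
    (value) => { queue.push(value); wake?.(); },
    () => { done = true; wake?.(); },
  );
  try {
    while (!done || queue.length > 0) {
      if (queue.length > 0) yield queue.shift()!;
      else await new Promise<void>((r) => { wake = r; });
    }
  } finally {
    unsubscribe();
  }
}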
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
registerProviderFactory(
|
||||
PROVIDER_NAME,
|
||||
async (config) => new PiSDKNodeProvider(config as PiSDKConfig),
|
||||
);
|
||||
442
extensions/plannotator/generated/ai/providers/pi-sdk.ts
Normal file
@@ -0,0 +1,442 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/providers/pi-sdk.ts
|
||||
/**
|
||||
* Pi SDK provider — bridges Plannotator's AI layer with Pi's coding agent.
|
||||
*
|
||||
* Spawns `pi --mode rpc` as a subprocess and communicates via JSONL over
|
||||
* stdio. No Pi SDK is imported — this is a thin protocol adapter.
|
||||
*
|
||||
* One subprocess per session. The user must have the `pi` CLI installed.
|
||||
*/
|
||||
|
||||
import { BaseSession } from "../base-session.ts";
|
||||
import { buildEffectivePrompt, buildSystemPrompt } from "../context.ts";
|
||||
import type {
|
||||
AIMessage,
|
||||
AIProvider,
|
||||
AIProviderCapabilities,
|
||||
CreateSessionOptions,
|
||||
PiSDKConfig,
|
||||
} from "../types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Constants
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const PROVIDER_NAME = "pi-sdk";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// JSONL subprocess wrapper
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
type EventListener = (event: Record<string, unknown>) => void;
|
||||
|
||||
class PiProcess {
|
||||
private proc: ReturnType<typeof Bun.spawn> | null = null;
|
||||
private listeners: EventListener[] = [];
|
||||
private pendingRequests = new Map<
|
||||
string,
|
||||
{
|
||||
resolve: (data: Record<string, unknown>) => void;
|
||||
reject: (err: Error) => void;
|
||||
}
|
||||
>();
|
||||
private nextId = 0;
|
||||
private buffer = "";
|
||||
private _alive = false;
|
||||
|
||||
async spawn(piPath: string, cwd: string): Promise<void> {
|
||||
this.proc = Bun.spawn([piPath, "--mode", "rpc"], {
|
||||
cwd,
|
||||
stdin: "pipe",
|
||||
stdout: "pipe",
|
||||
stderr: "pipe",
|
||||
});
|
||||
this._alive = true;
|
||||
|
||||
this.readStream();
|
||||
|
||||
this.proc.exited.then(() => {
|
||||
this._alive = false;
|
||||
for (const [, pending] of this.pendingRequests) {
|
||||
pending.reject(new Error("Pi process exited unexpectedly"));
|
||||
}
|
||||
this.pendingRequests.clear();
|
||||
// Signal active query listeners so the drain loop exits with an error
|
||||
for (const listener of this.listeners) {
|
||||
listener({ type: "process_exited" });
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
private async readStream(): Promise<void> {
|
||||
if (!this.proc?.stdout || typeof this.proc.stdout === "number") return;
|
||||
const reader = (this.proc.stdout as ReadableStream<Uint8Array>).getReader();
|
||||
const decoder = new TextDecoder();
|
||||
|
||||
try {
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
|
||||
this.buffer += decoder.decode(value, { stream: true });
|
||||
const lines = this.buffer.split("\n");
|
||||
this.buffer = lines.pop() ?? "";
|
||||
|
||||
for (const line of lines) {
|
||||
const trimmed = line.replace(/\r$/, "");
|
||||
if (!trimmed) continue;
|
||||
try {
|
||||
const parsed = JSON.parse(trimmed);
|
||||
this.routeMessage(parsed);
|
||||
} catch {
|
||||
// Ignore malformed lines
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
// Stream closed
|
||||
}
|
||||
}
|
||||
|
||||
private routeMessage(msg: Record<string, unknown>): void {
|
||||
// Response to a command we sent
|
||||
if (msg.type === "response" && typeof msg.id === "string") {
|
||||
const pending = this.pendingRequests.get(msg.id);
|
||||
if (pending) {
|
||||
this.pendingRequests.delete(msg.id);
|
||||
if (msg.success === false) {
|
||||
pending.reject(new Error((msg.error as string) ?? "RPC error"));
|
||||
} else {
|
||||
pending.resolve((msg.data as Record<string, unknown>) ?? {});
|
||||
}
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// Agent event — forward to listeners
|
||||
for (const listener of this.listeners) {
|
||||
listener(msg);
|
||||
}
|
||||
}
|
||||
|
||||
/** Send a command without waiting for a response. */
|
||||
send(command: Record<string, unknown>): void {
|
||||
if (!this.proc?.stdin || typeof this.proc.stdin === "number") return;
|
||||
// Bun.spawn stdin is a FileSink with .write(), not a WritableStream
|
||||
const sink = this.proc.stdin as { write(data: string): void; flush(): void };
|
||||
sink.write(`${JSON.stringify(command)}\n`);
|
||||
sink.flush();
|
||||
}
|
||||
|
||||
/** Send a command and wait for the correlated response. */
|
||||
sendAndWait(
|
||||
command: Record<string, unknown>,
|
||||
): Promise<Record<string, unknown>> {
|
||||
const id = `req_${++this.nextId}`;
|
||||
return new Promise((resolve, reject) => {
|
||||
this.pendingRequests.set(id, { resolve, reject });
|
||||
this.send({ ...command, id });
|
||||
});
|
||||
}
|
||||
|
||||
/** Register a listener for agent events (non-response messages). */
|
||||
onEvent(listener: EventListener): () => void {
|
||||
this.listeners.push(listener);
|
||||
return () => {
|
||||
const idx = this.listeners.indexOf(listener);
|
||||
if (idx >= 0) this.listeners.splice(idx, 1);
|
||||
};
|
||||
}
|
||||
|
||||
get alive(): boolean {
|
||||
return this._alive;
|
||||
}
|
||||
|
||||
kill(): void {
|
||||
this._alive = false;
|
||||
if (this.proc) {
|
||||
this.proc.kill();
|
||||
this.proc = null;
|
||||
}
|
||||
this.listeners.length = 0;
|
||||
for (const [, pending] of this.pendingRequests) {
|
||||
pending.reject(new Error("Process killed"));
|
||||
}
|
||||
this.pendingRequests.clear();
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class PiSDKProvider implements AIProvider {
|
||||
readonly name = PROVIDER_NAME;
|
||||
readonly capabilities: AIProviderCapabilities = {
|
||||
fork: false,
|
||||
resume: false,
|
||||
streaming: true,
|
||||
tools: true,
|
||||
};
|
||||
models?: Array<{ id: string; label: string; default?: boolean }>;
|
||||
|
||||
private config: PiSDKConfig;
|
||||
private sessions = new Map<string, PiSDKSession>();
|
||||
|
||||
constructor(config: PiSDKConfig) {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async createSession(options: CreateSessionOptions): Promise<PiSDKSession> {
|
||||
const session = new PiSDKSession({
|
||||
systemPrompt: buildSystemPrompt(options.context),
|
||||
cwd: options.cwd ?? this.config.cwd ?? process.cwd(),
|
||||
parentSessionId: null,
|
||||
piExecutablePath: this.config.piExecutablePath ?? "pi",
|
||||
model: options.model ?? this.config.model,
|
||||
});
|
||||
this.sessions.set(session.id, session);
|
||||
return session;
|
||||
}
|
||||
|
||||
async forkSession(): Promise<never> {
|
||||
throw new Error(
|
||||
"Pi does not support session forking. " +
|
||||
"The endpoint layer should fall back to createSession().",
|
||||
);
|
||||
}
|
||||
|
||||
async resumeSession(): Promise<never> {
|
||||
throw new Error("Pi does not support session resuming.");
|
||||
}
|
||||
|
||||
dispose(): void {
|
||||
for (const session of this.sessions.values()) {
|
||||
session.killProcess();
|
||||
}
|
||||
this.sessions.clear();
|
||||
}
|
||||
|
||||
/** Fetch available models from Pi. Call before registering the provider. */
|
||||
async fetchModels(): Promise<void> {
|
||||
const piPath = this.config.piExecutablePath ?? "pi";
|
||||
|
||||
let proc: PiProcess | undefined;
|
||||
|
||||
try {
|
||||
proc = new PiProcess();
|
||||
await proc.spawn(piPath, this.config.cwd ?? process.cwd());
|
||||
|
||||
const data = await Promise.race([
|
||||
proc.sendAndWait({ type: "get_available_models" }),
|
||||
new Promise<never>((_, reject) =>
|
||||
setTimeout(() => reject(new Error("Timeout")), 10_000),
|
||||
),
|
||||
]);
|
||||
|
||||
const rawModels = (
|
||||
data as {
|
||||
models?: Array<{ provider: string; id: string; name?: string }>;
|
||||
}
|
||||
).models;
|
||||
if (rawModels && rawModels.length > 0) {
|
||||
this.models = rawModels.map((m, i) => ({
|
||||
id: `${m.provider}/${m.id}`,
|
||||
label: m.name ?? m.id,
|
||||
...(i === 0 && { default: true }),
|
||||
}));
|
||||
}
|
||||
} catch {
|
||||
// Pi not configured or no models available
|
||||
} finally {
|
||||
proc?.kill();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface SessionConfig {
|
||||
systemPrompt: string;
|
||||
cwd: string;
|
||||
parentSessionId: string | null;
|
||||
piExecutablePath: string;
|
||||
/** Model in "provider/modelId" format, e.g. "anthropic/claude-haiku-4-5". */
|
||||
model?: string;
|
||||
}
|
||||
|
||||
class PiSDKSession extends BaseSession {
|
||||
private config: SessionConfig;
|
||||
private process: PiProcess | null = null;
|
||||
|
||||
constructor(config: SessionConfig) {
|
||||
super({ parentSessionId: config.parentSessionId });
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async *query(prompt: string): AsyncIterable<AIMessage> {
|
||||
const started = this.startQuery();
|
||||
if (!started) {
|
||||
yield BaseSession.BUSY_ERROR;
|
||||
return;
|
||||
}
|
||||
const { gen } = started;
|
||||
|
||||
try {
|
||||
// Lazy-spawn subprocess
|
||||
if (!this.process || !this.process.alive) {
|
||||
this.process = new PiProcess();
|
||||
await this.process.spawn(this.config.piExecutablePath, this.config.cwd);
|
||||
|
||||
// Set model if specified (format: "provider/modelId")
|
||||
if (this.config.model) {
|
||||
const [provider, ...rest] = this.config.model.split("/");
|
||||
const modelId = rest.join("/");
|
||||
if (provider && modelId) {
|
||||
try {
|
||||
await this.process.sendAndWait({
|
||||
type: "set_model",
|
||||
provider,
|
||||
modelId,
|
||||
});
|
||||
} catch {
|
||||
// Continue with Pi's default model
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Get session ID
|
||||
try {
|
||||
const state = await this.process.sendAndWait({ type: "get_state" });
|
||||
if (typeof state.sessionId === "string") {
|
||||
this.resolveId(state.sessionId);
|
||||
}
|
||||
} catch {
|
||||
// Continue with placeholder ID
|
||||
}
|
||||
|
||||
// If subprocess died during startup, surface the error immediately
|
||||
if (!this.process.alive) {
|
||||
yield {
|
||||
type: "error",
|
||||
error:
|
||||
"Pi process exited during startup. Check that Pi is configured correctly (API keys, models).",
|
||||
code: "pi_startup_error",
|
||||
};
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// Build effective prompt (prepend system prompt on first query)
|
||||
const effectivePrompt = buildEffectivePrompt(
|
||||
prompt,
|
||||
this.config.systemPrompt,
|
||||
this._firstQuerySent,
|
||||
);
|
||||
|
||||
// Set up async queue to bridge callback events → async iterable
|
||||
const queue: AIMessage[] = [];
|
||||
let resolve: (() => void) | null = null;
|
||||
let done = false;
|
||||
|
||||
const push = (msg: AIMessage) => {
|
||||
queue.push(msg);
|
||||
resolve?.();
|
||||
};
|
||||
|
||||
const finish = () => {
|
||||
done = true;
|
||||
resolve?.();
|
||||
};
|
||||
|
||||
const unsubscribe = this.process.onEvent((event) => {
|
||||
const mapped = mapPiEvent(event, this.id);
|
||||
for (const msg of mapped) {
|
||||
push(msg);
|
||||
if (
|
||||
msg.type === "result" ||
|
||||
(msg.type === "error" &&
|
||||
(event.type === "agent_end" || event.type === "process_exited"))
|
||||
) {
|
||||
finish();
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Send prompt — use sendAndWait to catch RPC-level rejections
|
||||
// (e.g. expired credentials, invalid session)
|
||||
try {
|
||||
await this.process.sendAndWait({
|
||||
type: "prompt",
|
||||
message: effectivePrompt,
|
||||
});
|
||||
} catch (err) {
|
||||
unsubscribe();
|
||||
yield {
|
||||
type: "error",
|
||||
error: `Pi rejected prompt: ${err instanceof Error ? err.message : String(err)}`,
|
||||
code: "pi_prompt_rejected",
|
||||
};
|
||||
return;
|
||||
}
|
||||
this._firstQuerySent = true;
|
||||
|
||||
// Drain queue
|
||||
try {
|
||||
while (!done || queue.length > 0) {
|
||||
if (queue.length > 0) {
|
||||
yield queue.shift()!;
|
||||
} else {
|
||||
await new Promise<void>((r) => {
|
||||
resolve = r;
|
||||
});
|
||||
resolve = null;
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
unsubscribe();
|
||||
}
|
||||
} catch (err) {
|
||||
yield {
|
||||
type: "error",
|
||||
error: err instanceof Error ? err.message : String(err),
|
||||
code: "provider_error",
|
||||
};
|
||||
} finally {
|
||||
this.endQuery(gen);
|
||||
}
|
||||
}
|
||||
|
||||
abort(): void {
|
||||
if (this.process?.alive) {
|
||||
this.process.send({ type: "abort" });
|
||||
}
|
||||
super.abort();
|
||||
}
|
||||
|
||||
/** Kill the subprocess. Called by the provider on dispose. */
|
||||
killProcess(): void {
|
||||
this.process?.kill();
|
||||
this.process = null;
|
||||
}
|
||||
}
|
||||
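// Illustrative usage sketch (not part of the generated source). The context is
// abbreviated; see types.ts for the full AIContext union. Plan text and cwd are
// placeholders.
async function examplePiSdkUsage(): Promise<void> {
  const provider = new PiSDKProvider({ type: "pi-sdk", cwd: process.cwd() });
  await provider.fetchModels(); // optional: populates provider.models
  const session = await provider.createSession({
    context: {
      mode: "plan-review",
      plan: { plan: "- [ ] Outline the change\n- [ ] Add tests" },
    },
  });
  for await (const msg of session.query("Summarize the open questions in this plan.")) {
    if (msg.type === "text_delta") process.stdout.write(msg.delta);
    if (msg.type === "result") break;
  }
  provider.dispose();
}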
|
||||
// ---------------------------------------------------------------------------
|
||||
// Event mapping — shared with pi-sdk-node.ts
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
import { mapPiEvent } from "./pi-events.ts";
|
||||
export { mapPiEvent } from "./pi-events.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory registration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
import { registerProviderFactory } from "../provider.ts";
|
||||
|
||||
registerProviderFactory(
|
||||
PROVIDER_NAME,
|
||||
async (config) => new PiSDKProvider(config as PiSDKConfig),
|
||||
);
|
||||
196
extensions/plannotator/generated/ai/session-manager.ts
Normal file
@@ -0,0 +1,196 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/session-manager.ts
|
||||
/**
|
||||
* Session manager — tracks active and historical AI sessions.
|
||||
*
|
||||
* Each Plannotator server instance (plan review, code review, annotate)
|
||||
* gets its own SessionManager. It tracks:
|
||||
*
|
||||
* - Active sessions (currently streaming or idle but resumable)
|
||||
* - The lineage from forked sessions back to their parent
|
||||
* - Metadata for UI display (timestamps, mode, status)
|
||||
*
|
||||
* This is an in-memory store scoped to the server's lifetime. Sessions
|
||||
* are not persisted to disk by the manager (the underlying provider
|
||||
* handles its own persistence via the agent SDK).
|
||||
*/
|
||||
|
||||
import type { AISession, AIContextMode } from "./types.ts";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Types
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface SessionEntry {
|
||||
/** The live session handle (if still active). */
|
||||
session: AISession;
|
||||
/** What mode this session was created for. */
|
||||
mode: AIContextMode;
|
||||
/** The parent session ID this was forked from (null if standalone). */
|
||||
parentSessionId: string | null;
|
||||
/** When this session was created. */
|
||||
createdAt: number;
|
||||
/** When the last query was sent. */
|
||||
lastActiveAt: number;
|
||||
/** Short description for UI display (e.g., the user's first question). */
|
||||
label?: string;
|
||||
}
|
||||
|
||||
export interface SessionManagerOptions {
|
||||
/**
|
||||
* Maximum number of sessions to keep in the manager.
|
||||
* Oldest idle sessions are evicted when the limit is reached.
|
||||
* Default: 20.
|
||||
*/
|
||||
maxSessions?: number;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Implementation
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export class SessionManager {
|
||||
private sessions = new Map<string, SessionEntry>();
|
||||
private aliases = new Map<string, string>();
|
||||
private maxSessions: number;
|
||||
|
||||
constructor(options: SessionManagerOptions = {}) {
|
||||
this.maxSessions = options.maxSessions ?? 20;
|
||||
}
|
||||
|
||||
/**
|
||||
* Track a newly created session.
|
||||
*
|
||||
* If the session supports ID resolution (e.g., the real SDK session ID
|
||||
* arrives after the first query), call `remapId()` to update the key.
|
||||
*/
|
||||
track(session: AISession, mode: AIContextMode, label?: string): SessionEntry {
|
||||
this.evictIfNeeded();
|
||||
|
||||
const entry: SessionEntry = {
|
||||
session,
|
||||
mode,
|
||||
parentSessionId: session.parentSessionId,
|
||||
createdAt: Date.now(),
|
||||
lastActiveAt: Date.now(),
|
||||
label,
|
||||
};
|
||||
this.sessions.set(session.id, entry);
|
||||
|
||||
// Wire up ID remapping so providers can resolve the real session ID later
|
||||
session.onIdResolved = (oldId, newId) => this.remapId(oldId, newId);
|
||||
|
||||
return entry;
|
||||
}
|
||||
|
||||
/**
|
||||
* Remap a session from one ID to another.
|
||||
* Used when the real session ID is resolved after initial tracking.
|
||||
*/
|
||||
remapId(oldId: string, newId: string): void {
|
||||
const entry = this.sessions.get(oldId);
|
||||
if (entry) {
|
||||
this.sessions.delete(oldId);
|
||||
this.sessions.set(newId, entry);
|
||||
// Keep the old ID as an alias so clients using the original ID still work
|
||||
this.aliases.set(oldId, newId);
|
||||
}
|
||||
}
|
||||
|
||||
/** Resolve an alias to the canonical ID, or return the ID as-is. */
|
||||
private resolve(sessionId: string): string {
|
||||
return this.aliases.get(sessionId) ?? sessionId;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a tracked session by ID (or alias).
|
||||
*/
|
||||
get(sessionId: string): SessionEntry | undefined {
|
||||
return this.sessions.get(this.resolve(sessionId));
|
||||
}
|
||||
|
||||
/**
|
||||
* Mark a session as recently active (updates lastActiveAt).
|
||||
*/
|
||||
touch(sessionId: string): void {
|
||||
const entry = this.sessions.get(this.resolve(sessionId));
|
||||
if (entry) {
|
||||
entry.lastActiveAt = Date.now();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a session from tracking.
|
||||
* Does NOT abort the session — call session.abort() first if needed.
|
||||
*/
|
||||
remove(sessionId: string): void {
|
||||
const canonical = this.resolve(sessionId);
|
||||
this.sessions.delete(canonical);
|
||||
// Clean up any aliases pointing to this session
|
||||
for (const [alias, target] of this.aliases) {
|
||||
if (target === canonical) this.aliases.delete(alias);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List all tracked sessions, newest first.
|
||||
*/
|
||||
list(): SessionEntry[] {
|
||||
return [...this.sessions.values()].sort(
|
||||
(a, b) => b.lastActiveAt - a.lastActiveAt
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* List sessions forked from a specific parent.
|
||||
*/
|
||||
forksOf(parentSessionId: string): SessionEntry[] {
|
||||
return this.list().filter(
|
||||
(e) => e.parentSessionId === parentSessionId
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the number of tracked sessions.
|
||||
*/
|
||||
get size(): number {
|
||||
return this.sessions.size;
|
||||
}
|
||||
|
||||
/**
|
||||
* Abort all active sessions and clear tracking.
|
||||
*/
|
||||
disposeAll(): void {
|
||||
for (const entry of this.sessions.values()) {
|
||||
if (entry.session.isActive) {
|
||||
entry.session.abort();
|
||||
}
|
||||
}
|
||||
this.sessions.clear();
|
||||
this.aliases.clear();
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Internal
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
private evictIfNeeded(): void {
|
||||
if (this.sessions.size < this.maxSessions) return;
|
||||
|
||||
// Find the oldest idle session to evict
|
||||
let oldest: { id: string; at: number } | null = null;
|
||||
for (const [id, entry] of this.sessions) {
|
||||
if (entry.session.isActive) continue; // don't evict active sessions
|
||||
if (!oldest || entry.lastActiveAt < oldest.at) {
|
||||
oldest = { id, at: entry.lastActiveAt };
|
||||
}
|
||||
}
|
||||
|
||||
if (oldest) {
|
||||
this.sessions.delete(oldest.id);
|
||||
// Clean up aliases pointing to the evicted session
|
||||
for (const [alias, target] of this.aliases) {
|
||||
if (target === oldest.id) this.aliases.delete(alias);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
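// Illustrative usage sketch (not part of the generated source): once a provider
// resolves the real SDK session id, lookups by the original placeholder id keep
// working through the alias map.
//
//   const manager = new SessionManager({ maxSessions: 10 });
//   const entry = manager.track(session, "plan-review", "Why this refactor?");
//   // later the provider resolves the real id, which fires onIdResolved:
//   //   manager.remapId(placeholderId, "ses_abc123");
//   manager.get(placeholderId); // still returns `entry`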
370
extensions/plannotator/generated/ai/types.ts
Normal file
@@ -0,0 +1,370 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/ai/types.ts
|
||||
/**
|
||||
* Core types for the Plannotator AI provider layer.
|
||||
*
|
||||
* This module defines the abstract interfaces that any agent runtime
|
||||
* (Claude Agent SDK, OpenCode, future providers) must implement to
|
||||
* power AI features inside Plannotator's plan review and code review UIs.
|
||||
*/
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Context — what the AI session knows about
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** The surface the user is interacting with when they invoke AI. */
|
||||
export type AIContextMode = "plan-review" | "code-review" | "annotate";
|
||||
|
||||
/**
|
||||
* Describes the parent agent session that originally produced the plan or diff.
|
||||
* Used to fork conversations with full history.
|
||||
*/
|
||||
export interface ParentSession {
|
||||
/** Session ID from the host agent (e.g. Claude Code session UUID). */
|
||||
sessionId: string;
|
||||
/** Working directory the parent session was running in. */
|
||||
cwd: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Snapshot of plan-review-specific context.
|
||||
* Passed when AIContextMode is "plan-review".
|
||||
*/
|
||||
export interface PlanContext {
|
||||
/** The full plan markdown as submitted by the agent. */
|
||||
plan: string;
|
||||
/** Previous plan version (if this is a resubmission). */
|
||||
previousPlan?: string;
|
||||
/** The version number in the plan's history. */
|
||||
version?: number;
|
||||
/** Annotations the user has made so far (serialised for the prompt). */
|
||||
annotations?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Snapshot of code-review-specific context.
|
||||
* Passed when AIContextMode is "code-review".
|
||||
*/
|
||||
export interface CodeReviewContext {
|
||||
/** The unified diff patch. */
|
||||
patch: string;
|
||||
/** The specific file being discussed (if scoped). */
|
||||
filePath?: string;
|
||||
/** The line range being discussed (if scoped). */
|
||||
lineRange?: { start: number; end: number; side: "old" | "new" };
|
||||
/** The code snippet being discussed (if scoped). */
|
||||
selectedCode?: string;
|
||||
/** Summary of annotations the user has made. */
|
||||
annotations?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Snapshot of annotate-mode context.
|
||||
* Passed when AIContextMode is "annotate".
|
||||
*/
|
||||
export interface AnnotateContext {
|
||||
/** The markdown file content being annotated. */
|
||||
content: string;
|
||||
/** Path to the file on disk. */
|
||||
filePath: string;
|
||||
/** Summary of annotations the user has made. */
|
||||
annotations?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Union of mode-specific contexts, discriminated by `mode`.
|
||||
*/
|
||||
export type AIContext =
|
||||
| { mode: "plan-review"; plan: PlanContext; parent?: ParentSession }
|
||||
| { mode: "code-review"; review: CodeReviewContext; parent?: ParentSession }
|
||||
| { mode: "annotate"; annotate: AnnotateContext; parent?: ParentSession };
|
||||
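// Illustrative sketch (not part of the generated source): a plan-review context
// as a Plannotator server might assemble it before creating a session. Plan text,
// ids, and paths are placeholders.
const exampleContext: AIContext = {
  mode: "plan-review",
  plan: {
    plan: "- [ ] Outline the change\n- [ ] Add tests",
    version: 2,
    annotations: "Split the work into client-side and server-side steps.",
  },
  parent: { sessionId: "ses_parent", cwd: "/path/to/repo" },
};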
|
||||
// ---------------------------------------------------------------------------
|
||||
// Messages — what streams back from the AI
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface AITextMessage {
|
||||
type: "text";
|
||||
text: string;
|
||||
}
|
||||
|
||||
export interface AITextDeltaMessage {
|
||||
type: "text_delta";
|
||||
delta: string;
|
||||
}
|
||||
|
||||
export interface AIToolUseMessage {
|
||||
type: "tool_use";
|
||||
toolName: string;
|
||||
toolInput: Record<string, unknown>;
|
||||
toolUseId: string;
|
||||
}
|
||||
|
||||
export interface AIToolResultMessage {
|
||||
type: "tool_result";
|
||||
toolUseId?: string;
|
||||
result: string;
|
||||
}
|
||||
|
||||
export interface AIErrorMessage {
|
||||
type: "error";
|
||||
error: string;
|
||||
code?: string;
|
||||
}
|
||||
|
||||
export interface AIResultMessage {
|
||||
type: "result";
|
||||
sessionId: string;
|
||||
success: boolean;
|
||||
/** The final text result (if success). */
|
||||
result?: string;
|
||||
/** Total cost in USD (if available). */
|
||||
costUsd?: number;
|
||||
/** Number of agentic turns used. */
|
||||
turns?: number;
|
||||
}
|
||||
|
||||
export interface AIPermissionRequestMessage {
|
||||
type: "permission_request";
|
||||
requestId: string;
|
||||
toolName: string;
|
||||
toolInput: Record<string, unknown>;
|
||||
title?: string;
|
||||
displayName?: string;
|
||||
description?: string;
|
||||
toolUseId: string;
|
||||
}
|
||||
|
||||
export interface AIUnknownMessage {
|
||||
type: "unknown";
|
||||
/** The raw message from the provider, for debugging/transparency. */
|
||||
raw: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export type AIMessage =
|
||||
| AITextMessage
|
||||
| AITextDeltaMessage
|
||||
| AIToolUseMessage
|
||||
| AIToolResultMessage
|
||||
| AIErrorMessage
|
||||
| AIResultMessage
|
||||
| AIPermissionRequestMessage
|
||||
| AIUnknownMessage;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Session — a live conversation with the AI
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface AISession {
|
||||
/** Unique identifier for this session. */
|
||||
readonly id: string;
|
||||
|
||||
/**
|
||||
* The parent session this was forked from, if any.
|
||||
* Null for fresh sessions.
|
||||
*/
|
||||
readonly parentSessionId: string | null;
|
||||
|
||||
/**
|
||||
* Send a prompt and stream back messages.
|
||||
* The returned async iterable yields messages as they arrive.
|
||||
*/
|
||||
query(prompt: string): AsyncIterable<AIMessage>;
|
||||
|
||||
/**
|
||||
* Abort the current in-flight query.
|
||||
* Safe to call if no query is running (no-op).
|
||||
*/
|
||||
abort(): void;
|
||||
|
||||
/** Whether a query is currently in progress. */
|
||||
readonly isActive: boolean;
|
||||
|
||||
/**
|
||||
* Respond to a permission request from the provider.
|
||||
* Called when the user approves or denies a tool use in the UI.
|
||||
*/
|
||||
respondToPermission?(requestId: string, allow: boolean, message?: string): void;
|
||||
|
||||
/**
|
||||
* Callback invoked when the real session ID is resolved from the provider.
|
||||
* Set by the SessionManager to remap its internal tracking key.
|
||||
*/
|
||||
onIdResolved?: (oldId: string, newId: string) => void;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider — the pluggable backend
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface AIProviderCapabilities {
|
||||
/** Whether the provider supports forking from a parent session. */
|
||||
fork: boolean;
|
||||
/** Whether the provider supports resuming a prior session by ID. */
|
||||
resume: boolean;
|
||||
/** Whether the provider streams partial text deltas. */
|
||||
streaming: boolean;
|
||||
/** Whether the provider can execute tools (read files, search, etc.). */
|
||||
tools: boolean;
|
||||
}
|
||||
|
||||
export interface CreateSessionOptions {
|
||||
/** The context (plan, diff, file) to seed the session with. */
|
||||
context: AIContext;
|
||||
/**
|
||||
* Working directory override for the agent session.
|
||||
* Falls back to the provider's configured cwd if omitted.
|
||||
*/
|
||||
cwd?: string;
|
||||
/**
|
||||
* Model override. Provider-specific string.
|
||||
* Falls back to provider default if omitted.
|
||||
*/
|
||||
model?: string;
|
||||
/**
|
||||
* Maximum agentic turns for the session.
|
||||
* Keeps inline chat cost-bounded.
|
||||
*/
|
||||
maxTurns?: number;
|
||||
/**
|
||||
* Maximum budget in USD for this session.
|
||||
*/
|
||||
maxBudgetUsd?: number;
|
||||
/**
|
||||
* Reasoning effort level (Codex only).
|
||||
* Controls how much thinking the model does before responding.
|
||||
*/
|
||||
reasoningEffort?: "minimal" | "low" | "medium" | "high" | "xhigh";
|
||||
}
|
||||
|
||||
/**
|
||||
* An AI provider implements the bridge between Plannotator and a specific
|
||||
* agent runtime. The provider is responsible for:
|
||||
*
|
||||
* 1. Creating new AI sessions seeded with review context
|
||||
* 2. Forking from parent agent sessions to maintain conversation history
|
||||
* 3. Streaming responses back as AIMessage events
|
||||
*
|
||||
* Providers are registered by name and selected at runtime based on the
|
||||
* host environment (Claude Code → "claude-agent-sdk", OpenCode → "opencode-sdk").
|
||||
*/
|
||||
export interface AIProvider {
|
||||
/** Unique name for this provider (e.g. "claude-agent-sdk"). */
|
||||
readonly name: string;
|
||||
|
||||
/** What this provider can do. */
|
||||
readonly capabilities: AIProviderCapabilities;
|
||||
|
||||
/** Available models for this provider. */
|
||||
readonly models?: ReadonlyArray<{ id: string; label: string; default?: boolean }>;
|
||||
|
||||
/**
|
||||
* Create a fresh session (no parent history).
|
||||
* Context is injected via the system prompt.
|
||||
*/
|
||||
createSession(options: CreateSessionOptions): Promise<AISession>;
|
||||
|
||||
/**
|
||||
* Fork from a parent agent session.
|
||||
*
|
||||
* The new session inherits the parent's full conversation history
|
||||
* (files read, analysis performed, decisions made) and additionally
|
||||
* receives the Plannotator review context. This enables the user to
|
||||
* ask contextual questions like "why did you change this function?"
|
||||
* without the AI losing the context it has already built up.
|
||||
*
|
||||
* Providers that don't support real forking MUST throw. The endpoint
|
||||
* layer checks `capabilities.fork` before calling this, so it should
|
||||
* only be reached by providers that genuinely support history inheritance.
|
||||
*/
|
||||
forkSession(options: CreateSessionOptions): Promise<AISession>;
|
||||
|
||||
/**
|
||||
* Resume a previously created Plannotator AI session by its ID.
|
||||
* Used when the user returns to a conversation they started earlier.
|
||||
*
|
||||
* If the provider doesn't support resuming, this should throw.
|
||||
*/
|
||||
resumeSession(sessionId: string): Promise<AISession>;
|
||||
|
||||
/**
|
||||
* Clean up any resources held by the provider.
|
||||
* Called when the server shuts down.
|
||||
*/
|
||||
dispose(): void;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Provider configuration
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Configuration passed to a provider factory.
|
||||
* Each provider type may extend this with its own fields.
|
||||
*/
|
||||
export interface AIProviderConfig {
|
||||
/** Provider type identifier (matches AIProvider.name). */
|
||||
type: string;
|
||||
/** Working directory for the agent. */
|
||||
cwd?: string;
|
||||
/** Default model to use. */
|
||||
model?: string;
|
||||
}
|
||||
|
||||
export interface ClaudeAgentSDKConfig extends AIProviderConfig {
|
||||
type: "claude-agent-sdk";
|
||||
/**
|
||||
* Tools the AI session is allowed to use.
|
||||
* Defaults to read-only tools for safety in inline chat.
|
||||
*/
|
||||
allowedTools?: string[];
|
||||
/**
|
||||
* Permission mode for the session.
|
||||
* Defaults to "default" (inherits user's existing permission rules).
|
||||
*/
|
||||
permissionMode?: "default" | "plan" | "bypassPermissions";
|
||||
/**
|
||||
* Explicit path to the claude CLI binary.
|
||||
* Required when running inside a compiled binary where PATH resolution
|
||||
* doesn't work the same way (e.g., bun build --compile).
|
||||
*/
|
||||
claudeExecutablePath?: string;
|
||||
/**
|
||||
* Setting sources to load permission rules from.
|
||||
* Loads user's existing Claude Code permission rules so inline chat
|
||||
* inherits what they've already approved.
|
||||
*/
|
||||
settingSources?: string[];
|
||||
}
|
||||
|
||||
export interface CodexSDKConfig extends AIProviderConfig {
|
||||
type: "codex-sdk";
|
||||
/**
|
||||
* Sandbox mode controls what the Codex agent can do.
|
||||
* Defaults to "read-only" for safety in inline chat.
|
||||
*/
|
||||
sandboxMode?: "read-only" | "workspace-write" | "danger-full-access";
|
||||
/**
|
||||
* Explicit path to the codex CLI binary.
|
||||
* Required when running inside a compiled binary where PATH resolution
|
||||
* doesn't work the same way (e.g., bun build --compile).
|
||||
*/
|
||||
codexExecutablePath?: string;
|
||||
}
|
||||
|
||||
export interface PiSDKConfig extends AIProviderConfig {
|
||||
type: "pi-sdk";
|
||||
/**
|
||||
* Explicit path to the pi CLI binary.
|
||||
* Required when running inside a compiled binary where PATH resolution
|
||||
* doesn't work the same way (e.g., bun build --compile).
|
||||
*/
|
||||
piExecutablePath?: string;
|
||||
}
|
||||
|
||||
export interface OpenCodeConfig extends AIProviderConfig {
|
||||
type: "opencode-sdk";
|
||||
/** Hostname for the OpenCode server. Default: "127.0.0.1". */
|
||||
hostname?: string;
|
||||
/** Port for the OpenCode server. Default: 4096. */
|
||||
port?: number;
|
||||
}
|
||||
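// Illustrative sketch (not part of the generated source): concrete config objects
// for two of the provider types above. Paths and the model id are placeholders.
const exampleOpenCodeConfig: OpenCodeConfig = {
  type: "opencode-sdk",
  cwd: "/path/to/repo",
  hostname: "127.0.0.1",
  port: 4096,
};
const examplePiSdkConfig: PiSDKConfig = {
  type: "pi-sdk",
  model: "anthropic/claude-haiku-4-5",
  piExecutablePath: "/usr/local/bin/pi",
};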
104
extensions/plannotator/generated/annotate-args.ts
Normal file
@@ -0,0 +1,104 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/annotate-args.ts
|
||||
/**
|
||||
* Parse CLI-style args arriving as a single whitespace-delimited string.
|
||||
*
|
||||
* Extracts the `--gate`, `--json`, and `--hook` flags (issue #570)
|
||||
* from the remainder, which is treated as the target path. Leading `@` is
|
||||
* stripped via the shared at-reference helper — reference-mode is primary.
|
||||
* Scoped-package-style literal `@` paths are handled by a fallback that the
|
||||
* downstream resolver opts into (see at-reference.ts).
|
||||
*
|
||||
* Used by the OpenCode plugin and Pi extension, where the whole args string
|
||||
* arrives pre-joined from the harness slash-command dispatcher. The Claude
|
||||
* Code binary parses argv directly with indexOf/splice and does not use
|
||||
* this helper.
|
||||
*
|
||||
* Implementation: walks the raw string once, preserving whitespace runs and
|
||||
* non-whitespace tokens as separate segments. Only known flag tokens
|
||||
* (whole-word match) plus one adjacent whitespace run are removed.
|
||||
* This keeps double-spaces and tabs inside file paths intact — which
|
||||
* matches the pre-PR behavior on `main`, where OpenCode and Pi passed
|
||||
* the raw args string straight through to the filesystem resolver.
|
||||
*
|
||||
* Remaining edge: if a path literally contains a known flag as a standalone
|
||||
* whitespace-separated token (e.g. `"Feature --gate spec.md"`), that token
|
||||
* is stripped. Supporting this would need shell-style quoting, which isn't
|
||||
* worth the complexity for a vanishingly rare naming pattern.
|
||||
*/
|
||||
|
||||
import { stripAtPrefix } from "./at-reference";
|
||||
import { stripWrappingQuotes } from "./resolve-file";
|
||||
|
||||
export interface ParsedAnnotateArgs {
|
||||
/**
|
||||
* Primary resolution path with any leading `@` stripped (reference-mode
|
||||
* convention). Most call sites should use this directly.
|
||||
*/
|
||||
filePath: string;
|
||||
/**
|
||||
* Raw path with the `@` prefix preserved (if the user supplied one).
|
||||
* Callers that want the literal-`@` fallback for scoped-package-style
|
||||
* paths pair this with `resolveAtReference` from at-reference.ts.
|
||||
*/
|
||||
rawFilePath: string;
|
||||
gate: boolean;
|
||||
json: boolean;
|
||||
hook: boolean;
|
||||
}
|
||||
|
||||
type Segment = { type: "ws" | "tok"; text: string };
|
||||
|
||||
const FLAG_MAP = {
|
||||
"--gate": "gate",
|
||||
"--json": "json",
|
||||
"--hook": "hook",
|
||||
} as const satisfies Record<string, keyof Omit<ParsedAnnotateArgs, "filePath" | "rawFilePath">>;
|
||||
|
||||
export function parseAnnotateArgs(raw: string): ParsedAnnotateArgs {
|
||||
const s = (raw ?? "").trim();
|
||||
const flags = { gate: false, json: false, hook: false };
|
||||
|
||||
const segments: Segment[] = [];
|
||||
for (let i = 0; i < s.length;) {
|
||||
const isWs = /\s/.test(s[i]);
|
||||
const start = i;
|
||||
while (i < s.length && /\s/.test(s[i]) === isWs) i++;
|
||||
segments.push({ type: isWs ? "ws" : "tok", text: s.slice(start, i) });
|
||||
}
|
||||
|
||||
const keep = segments.map(() => true);
|
||||
for (let j = 0; j < segments.length; j++) {
|
||||
const seg = segments[j];
|
||||
if (seg.type !== "tok") continue;
|
||||
const key = FLAG_MAP[seg.text as keyof typeof FLAG_MAP];
|
||||
if (!key) continue;
|
||||
|
||||
flags[key] = true;
|
||||
keep[j] = false;
|
||||
|
||||
// Drop one adjacent whitespace run so removed flags don't leave dangling
|
||||
// spaces. Prefer trailing whitespace; fall back to leading if at the end.
|
||||
if (j + 1 < segments.length && segments[j + 1].type === "ws") {
|
||||
keep[j + 1] = false;
|
||||
} else if (j > 0 && segments[j - 1].type === "ws") {
|
||||
keep[j - 1] = false;
|
||||
}
|
||||
}
|
||||
|
||||
// Trim covers the case where two adjacent flags (`... --gate --json`)
|
||||
// both claim the single whitespace between them, leaving a trailing space
|
||||
// after the kept token. Wrapping quotes come from OpenCode/Pi users who
|
||||
// quote paths with spaces (shell muscle memory); strip them here so
|
||||
// downstream callers never see tokenization artifacts.
|
||||
const rawFilePath = stripWrappingQuotes(
|
||||
segments
|
||||
.filter((_, j) => keep[j])
|
||||
.map((seg) => seg.text)
|
||||
.join("")
|
||||
.trim(),
|
||||
);
|
||||
|
||||
if (flags.hook) flags.gate = true;
|
||||
|
||||
return { filePath: stripAtPrefix(rawFilePath), rawFilePath, ...flags };
|
||||
}
|
||||
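A short usage sketch of the parser above — the inputs are hypothetical and the import path assumes the generated module name:

```ts
import { parseAnnotateArgs } from "./annotate-args";

// Flags are whole-word tokens; everything else is the target path, whitespace preserved.
parseAnnotateArgs("@docs/plan review.md --gate");
// → { filePath: "docs/plan review.md", rawFilePath: "@docs/plan review.md",
//     gate: true, json: false, hook: false }

// --hook also sets gate (see the flag handling above).
parseAnnotateArgs("--hook spec.md");
// → { filePath: "spec.md", rawFilePath: "spec.md", gate: true, json: false, hook: true }
```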
53
extensions/plannotator/generated/at-reference.ts
Normal file
@@ -0,0 +1,53 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/at-reference.ts
|
||||
/**
|
||||
* `@`-reference handling for user-provided paths.
|
||||
*
|
||||
* Several agent harnesses (Claude Code, OpenCode, Pi) let users reference
|
||||
* files with an `@` prefix, e.g. `@README.md`. The `@` is the team's
|
||||
* reference marker, not part of the filename. Stripping it is the primary
|
||||
* resolution path — that's the common case and it's supported first-class.
|
||||
*
|
||||
* The secondary path handles scoped-package-style names like
|
||||
* `@scope/pkg/README.md`: if the stripped form doesn't resolve, fall back
|
||||
* to the literal form so those paths still open.
|
||||
*
|
||||
* Both functions are pure and take any filesystem-ish predicate via a
|
||||
* callback, so they're trivial to unit-test without stubbing anything.
|
||||
*/
|
||||
|
||||
import { stripWrappingQuotes } from "./resolve-file";
|
||||
|
||||
/**
|
||||
* Normalize a user-typed path reference by unwrapping matching `"..."` or
|
||||
* `'...'` quotes and removing a single leading `@`. Quotes come from
|
||||
* harnesses that tokenize on whitespace (OpenCode, Pi), where paths
|
||||
* containing spaces have to be quoted. The quote-stripping has to run
|
||||
* first so the `@` check sees the real first character.
|
||||
*
|
||||
* Non-`@` inputs are returned unchanged except for quote unwrapping.
|
||||
* Does not recurse: `@@foo` becomes `@foo`, not `foo`.
|
||||
*/
|
||||
export function stripAtPrefix(input: string): string {
|
||||
const unquoted = stripWrappingQuotes(input);
|
||||
return unquoted.startsWith("@") ? unquoted.slice(1) : unquoted;
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve an `@`-prefixed user input by trying the stripped form first
|
||||
* (reference mode, primary) and falling back to the literal form if the
|
||||
* stripped form doesn't resolve. Returns the candidate that resolves, or
|
||||
* null if neither does.
|
||||
*
|
||||
* `exists` defines what "resolves" means — use `existsSync` for a bare
|
||||
* filesystem check, or wrap `resolveMarkdownFile` / `statSync` for richer
|
||||
* predicates. The helper itself is filesystem-agnostic.
|
||||
*/
|
||||
export function resolveAtReference(
|
||||
input: string,
|
||||
exists: (candidate: string) => boolean,
|
||||
): string | null {
|
||||
const stripped = stripAtPrefix(input);
|
||||
if (exists(stripped)) return stripped;
|
||||
if (stripped !== input && exists(input)) return input;
|
||||
return null;
|
||||
}
|
||||
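A minimal sketch of the two-step resolution described above — the predicate and paths are illustrative:

```ts
import { existsSync } from "node:fs";
import { resolveAtReference, stripAtPrefix } from "./at-reference";

stripAtPrefix('"@docs/Plan Notes.md"');   // → "docs/Plan Notes.md" (quotes unwrapped first)

// Reference mode first, literal fallback second:
resolveAtReference("@README.md", existsSync);
// → "README.md" if it exists, otherwise "@README.md" if that literal path exists, else null

// Any predicate works — handy for unit tests:
resolveAtReference("@scope/pkg/README.md", (p) => p === "@scope/pkg/README.md");
// → "@scope/pkg/README.md" (stripped form doesn't resolve, literal form does)
```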
53
extensions/plannotator/generated/checklist.ts
Normal file
@@ -0,0 +1,53 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/checklist.ts
|
||||
/**
|
||||
* Checklist parsing and progress tracking utilities.
|
||||
*
|
||||
* Shared between Pi extension and OpenCode plugin for plan execution tracking.
|
||||
*/
|
||||
|
||||
export interface ChecklistItem {
|
||||
/** 1-based step number, compatible with markCompletedSteps/extractDoneSteps. */
|
||||
step: number;
|
||||
text: string;
|
||||
completed: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse standard markdown checkboxes from file content.
|
||||
*
|
||||
* Matches lines like:
|
||||
* - [ ] Step description
|
||||
* - [x] Completed step
|
||||
* * [ ] Alternative bullet
|
||||
*/
|
||||
export function parseChecklist(content: string): ChecklistItem[] {
|
||||
const items: ChecklistItem[] = [];
|
||||
const pattern = /^[-*]\s*\[([ xX])\]\s+(.+)$/gm;
|
||||
|
||||
for (const match of content.matchAll(pattern)) {
|
||||
const completed = match[1] !== " ";
|
||||
const text = match[2].trim();
|
||||
if (text.length > 0) {
|
||||
items.push({ step: items.length + 1, text, completed });
|
||||
}
|
||||
}
|
||||
return items;
|
||||
}
|
||||
|
||||
export function extractDoneSteps(message: string): number[] {
|
||||
const steps: number[] = [];
|
||||
for (const match of message.matchAll(/\[DONE:(\d+)\]/gi)) {
|
||||
const step = Number(match[1]);
|
||||
if (Number.isFinite(step)) steps.push(step);
|
||||
}
|
||||
return steps;
|
||||
}
|
||||
|
||||
export function markCompletedSteps(text: string, items: ChecklistItem[]): number {
|
||||
const doneSteps = extractDoneSteps(text);
|
||||
for (const step of doneSteps) {
|
||||
const item = items.find((t) => t.step === step);
|
||||
if (item) item.completed = true;
|
||||
}
|
||||
return doneSteps.length;
|
||||
}
|
||||
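A usage sketch tying the three helpers together — the plan text is hypothetical:

```ts
import { parseChecklist, extractDoneSteps, markCompletedSteps } from "./checklist";

const plan = [
  "- [ ] Add validation to the login form",
  "- [x] Write tests for the new validation logic",
].join("\n");

const items = parseChecklist(plan);
// → [{ step: 1, text: "Add validation to the login form", completed: false },
//    { step: 2, text: "Write tests for the new validation logic", completed: true }]

extractDoneSteps("Finished the form work [DONE:1]");          // → [1]
markCompletedSteps("Finished the form work [DONE:1]", items);
// items[0].completed is now true; returns 1 (count of [DONE:n] markers found)
```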
346
extensions/plannotator/generated/claude-review.ts
Normal file
@@ -0,0 +1,346 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/server/claude-review.ts
|
||||
import { toRelativePath } from "./path-utils.js";
|
||||
|
||||
/**
|
||||
* Claude Code Review Agent — prompt, command builder, and JSONL output parser.
|
||||
*
|
||||
* Claude has its own review model (severity-based findings with reasoning traces)
|
||||
* separate from Codex's priority-based model. The transform layer normalizes
|
||||
* both into the shared annotation format.
|
||||
*
|
||||
* Claude uses --json-schema (inline JSON + Ajv validation with retries) and
|
||||
* --output-format stream-json for live JSONL streaming. The final event is
|
||||
* type:"result" with structured_output containing validated findings.
|
||||
*/
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Types
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export type ClaudeSeverity = "important" | "nit" | "pre_existing";
|
||||
|
||||
export interface ClaudeFinding {
|
||||
severity: ClaudeSeverity;
|
||||
file: string;
|
||||
line: number;
|
||||
end_line: number;
|
||||
description: string;
|
||||
reasoning: string;
|
||||
}
|
||||
|
||||
export interface ClaudeReviewOutput {
|
||||
findings: ClaudeFinding[];
|
||||
summary: {
|
||||
important: number;
|
||||
nit: number;
|
||||
pre_existing: number;
|
||||
};
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Schema — Claude's own severity-based model
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export const CLAUDE_REVIEW_SCHEMA_JSON = JSON.stringify({
|
||||
type: "object",
|
||||
properties: {
|
||||
findings: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
severity: { type: "string", enum: ["important", "nit", "pre_existing"] },
|
||||
file: { type: "string" },
|
||||
line: { type: "integer" },
|
||||
end_line: { type: "integer" },
|
||||
description: { type: "string" },
|
||||
reasoning: { type: "string" },
|
||||
},
|
||||
required: ["severity", "file", "line", "end_line", "description", "reasoning"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
summary: {
|
||||
type: "object",
|
||||
properties: {
|
||||
important: { type: "integer" },
|
||||
nit: { type: "integer" },
|
||||
pre_existing: { type: "integer" },
|
||||
},
|
||||
required: ["important", "nit", "pre_existing"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
required: ["findings", "summary"],
|
||||
additionalProperties: false,
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Review prompt — converges open-source Claude Code review + remote service
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export const CLAUDE_REVIEW_PROMPT = `# Claude Code Review System Prompt
|
||||
|
||||
## Identity
|
||||
You are a code review system. Your job is to find bugs that would break
|
||||
production. You are not a linter, formatter, or style checker unless
|
||||
project guidance files explicitly expand your scope.
|
||||
|
||||
## Pipeline
|
||||
|
||||
Step 1: Gather context
|
||||
- Retrieve the PR diff (gh pr diff or git diff)
|
||||
- Read CLAUDE.md and REVIEW.md at the repo root and in every directory
|
||||
containing modified files
|
||||
- Build a map of which rules apply to which file paths
|
||||
- Identify any skip rules (paths, patterns, or file types to ignore)
|
||||
|
||||
Step 2: Launch 4 parallel review agents
|
||||
|
||||
Agent 1 — Bug + Regression (Opus-level reasoning)
|
||||
Scan for logic errors, regressions, broken edge cases, build failures,
|
||||
and code that will produce wrong results. Focus on the diff but read
|
||||
surrounding code to understand call sites and data flow. Flag only
|
||||
issues where the code is demonstrably wrong — not stylistic concerns,
|
||||
not missing tests, not "could be cleaner."
|
||||
|
||||
Agent 2 — Security + Deep Analysis (Opus-level reasoning)
|
||||
Look for security vulnerabilities with concrete exploit paths, race
|
||||
conditions, incorrect assumptions about trust boundaries, and subtle
|
||||
issues in introduced code. Read surrounding code for context. Do not
|
||||
flag theoretical risks without a plausible path to harm.
|
||||
|
||||
Agent 3 — Code Quality + Reusability (Sonnet-level reasoning)
|
||||
Look for code smells, unnecessary duplication, missed opportunities to
|
||||
reuse existing utilities or patterns in the codebase, overly complex
|
||||
implementations that could be simpler, and elegance issues. Read the
|
||||
surrounding codebase to understand existing patterns before flagging.
|
||||
Only flag issues a senior engineer would care about.
|
||||
|
||||
Agent 4 — Guideline Compliance (Haiku-level reasoning)
|
||||
Audit changes against rules from CLAUDE.md and REVIEW.md gathered in
|
||||
Step 1. Only flag clear, unambiguous violations where you can cite the
|
||||
exact rule broken. If a PR makes a CLAUDE.md statement outdated, flag
|
||||
that the docs need updating. Respect all skip rules — never flag files
|
||||
or patterns that guidance says to ignore.
|
||||
|
||||
All agents:
|
||||
- Do not duplicate each other's findings
|
||||
- Do not flag issues in paths excluded by guidance files
|
||||
- Provide file, line number, and a concise description for each candidate
|
||||
|
||||
Step 3: Validate each candidate finding
|
||||
For each candidate, launch a validation agent. The validator:
|
||||
- Traces the actual code path to confirm the issue is real
|
||||
- Checks whether the issue is handled elsewhere (try/catch, upstream
|
||||
guard, fallback logic, type system guarantees)
|
||||
- Confirms the finding is not a false positive with high confidence
|
||||
- If validation fails, drop the finding silently
|
||||
- If validation passes, write a clear reasoning chain explaining how
|
||||
the issue was confirmed — this becomes the \`reasoning\` field
|
||||
|
||||
Step 4: Classify each validated finding
|
||||
Assign exactly one severity:
|
||||
|
||||
important — A bug that should be fixed before merging. Build failures,
|
||||
clear logic errors, security vulnerabilities with exploit paths, data
|
||||
loss risks, race conditions with observable consequences.
|
||||
|
||||
nit — A minor issue worth fixing but non-blocking. Style deviations
|
||||
from project guidelines, code quality concerns, edge cases that are
|
||||
unlikely but worth noting, convention violations that don't affect
|
||||
correctness.
|
||||
|
||||
pre_existing — A bug that exists in the surrounding codebase but was
|
||||
NOT introduced by this PR. Only flag when directly relevant to the
|
||||
changed code path.
|
||||
|
||||
Step 5: Deduplicate and rank
|
||||
- Merge findings that describe the same underlying issue from different
|
||||
agents — keep the most specific description and the highest severity
|
||||
- Sort by severity: important → nit → pre_existing
|
||||
- Within each severity, sort by file path and line number
|
||||
|
||||
Step 6: Return structured JSON output matching the schema.
|
||||
If no issues are found, return an empty findings array with zeroed summary.
|
||||
|
||||
## Hard constraints
|
||||
- Never approve or block the PR
|
||||
- Never comment on formatting or code style unless guidance files say to
|
||||
- Never flag missing test coverage unless guidance files say to
|
||||
- Never invent rules — only enforce what CLAUDE.md or REVIEW.md state
|
||||
- Never flag issues in skipped paths or generated files unless guidance
|
||||
explicitly includes them
|
||||
- Prefer silence over false positives — when in doubt, drop the finding
|
||||
- Do NOT post any comments to GitHub or GitLab
|
||||
- Do NOT use gh pr comment or any commenting tool
|
||||
- Your only output is the structured JSON findings`;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Command builder
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface ClaudeCommandResult {
|
||||
command: string[];
|
||||
/** Prompt text to write to stdin (Claude reads prompt from stdin, not argv). */
|
||||
stdinPrompt: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Build the `claude -p` command. Prompt is passed via stdin, not as a
|
||||
* positional arg — avoids quoting issues, argv limits, and variadic flag conflicts.
|
||||
*/
|
||||
export function buildClaudeCommand(prompt: string, model: string = "claude-opus-4-7", effort?: string): ClaudeCommandResult {
|
||||
const allowedTools = [
|
||||
"Agent", "Read", "Glob", "Grep",
|
||||
// GitHub CLI
|
||||
"Bash(gh pr view:*)", "Bash(gh pr diff:*)", "Bash(gh pr list:*)",
|
||||
"Bash(gh issue view:*)", "Bash(gh issue list:*)",
|
||||
"Bash(gh api repos/*/*/pulls/*)", "Bash(gh api repos/*/*/pulls/*/files*)",
|
||||
"Bash(gh api repos/*/*/pulls/*/comments*)", "Bash(gh api repos/*/*/issues/*/comments*)",
|
||||
// GitLab CLI
|
||||
"Bash(glab mr view:*)", "Bash(glab mr diff:*)", "Bash(glab mr list:*)",
|
||||
"Bash(glab api:*)",
|
||||
// Git (read-only)
|
||||
"Bash(git status:*)", "Bash(git diff:*)", "Bash(git log:*)",
|
||||
"Bash(git show:*)", "Bash(git blame:*)", "Bash(git branch:*)",
|
||||
"Bash(git grep:*)", "Bash(git ls-remote:*)", "Bash(git ls-tree:*)",
|
||||
"Bash(git merge-base:*)", "Bash(git remote:*)", "Bash(git rev-parse:*)",
|
||||
"Bash(git show-ref:*)",
|
||||
"Bash(wc:*)",
|
||||
].join(",");
|
||||
|
||||
const disallowedTools = [
|
||||
"Edit", "Write", "NotebookEdit", "WebFetch", "WebSearch",
|
||||
"Bash(python:*)", "Bash(python3:*)", "Bash(node:*)", "Bash(npx:*)",
|
||||
"Bash(bun:*)", "Bash(bunx:*)", "Bash(sh:*)", "Bash(bash:*)", "Bash(zsh:*)",
|
||||
"Bash(curl:*)", "Bash(wget:*)",
|
||||
].join(",");
|
||||
|
||||
return {
|
||||
command: [
|
||||
"claude", "-p",
|
||||
"--permission-mode", "dontAsk",
|
||||
"--output-format", "stream-json",
|
||||
"--verbose",
|
||||
"--json-schema", CLAUDE_REVIEW_SCHEMA_JSON,
|
||||
"--no-session-persistence",
|
||||
"--model", model,
|
||||
...(effort ? ["--effort", effort] : []),
|
||||
"--tools", "Agent,Bash,Read,Glob,Grep",
|
||||
"--allowedTools", allowedTools,
|
||||
"--disallowedTools", disallowedTools,
|
||||
],
|
||||
stdinPrompt: prompt,
|
||||
};
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// JSONL stream output parser
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Parse Claude Code's stream-json output (JSONL).
|
||||
* Extracts structured_output from the final type:"result" event.
|
||||
*/
|
||||
export function parseClaudeStreamOutput(stdout: string): ClaudeReviewOutput | null {
|
||||
if (!stdout.trim()) return null;
|
||||
|
||||
const lines = stdout.trim().split('\n');
|
||||
for (let i = lines.length - 1; i >= 0; i--) {
|
||||
const line = lines[i].trim();
|
||||
if (!line) continue;
|
||||
|
||||
try {
|
||||
const event = JSON.parse(line);
|
||||
|
||||
if (event.type === 'result') {
|
||||
if (event.is_error) return null;
|
||||
|
||||
const output = event.structured_output;
|
||||
if (!output || !Array.isArray(output.findings)) return null;
|
||||
|
||||
return output as ClaudeReviewOutput;
|
||||
}
|
||||
} catch {
|
||||
// Not valid JSON — skip
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
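A minimal illustration of the JSONL shape the parser expects — the event fields are abridged to the ones the parser actually reads:

```ts
import { parseClaudeStreamOutput } from "./claude-review";

const stdout = [
  '{"type":"assistant","message":{"content":[{"type":"text","text":"Reviewing the diff..."}]}}',
  '{"type":"result","is_error":false,"structured_output":{"findings":[],"summary":{"important":0,"nit":0,"pre_existing":0}}}',
].join("\n");

parseClaudeStreamOutput(stdout);
// → { findings: [], summary: { important: 0, nit: 0, pre_existing: 0 } }
// Malformed lines are skipped; a result event with is_error: true yields null.
```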
|
||||
// ---------------------------------------------------------------------------
|
||||
// Finding transform — Claude findings → external annotations
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** Transform Claude findings into the external annotation format. */
|
||||
export function transformClaudeFindings(
|
||||
findings: ClaudeFinding[],
|
||||
source: string,
|
||||
cwd?: string,
|
||||
): Array<{
|
||||
source: string;
|
||||
filePath: string;
|
||||
lineStart: number;
|
||||
lineEnd: number;
|
||||
type: string;
|
||||
side: string;
|
||||
scope: string;
|
||||
text: string;
|
||||
severity: ClaudeSeverity;
|
||||
reasoning: string;
|
||||
author: string;
|
||||
}> {
|
||||
return findings
|
||||
.filter(f => f.file && typeof f.line === "number")
|
||||
.map(f => ({
|
||||
source,
|
||||
filePath: toRelativePath(f.file, cwd),
|
||||
lineStart: f.line,
|
||||
lineEnd: f.end_line ?? f.line,
|
||||
type: "comment",
|
||||
side: "new",
|
||||
scope: "line",
|
||||
text: `[${f.severity}] ${f.description}`,
|
||||
severity: f.severity,
|
||||
reasoning: f.reasoning,
|
||||
author: "Claude Code",
|
||||
}));
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Live log formatter
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Extract log-worthy content from a JSONL line for the LiveLogViewer.
|
||||
* Returns a human-readable string, or null if the line should be skipped.
|
||||
*/
|
||||
export function formatClaudeLogEvent(line: string): string | null {
|
||||
try {
|
||||
const event = JSON.parse(line);
|
||||
|
||||
// Skip the final result event — handled separately
|
||||
if (event.type === 'result') return null;
|
||||
|
||||
// Assistant messages (the agent's thinking/responses)
|
||||
if (event.type === 'assistant' && event.message?.content) {
|
||||
const parts = Array.isArray(event.message.content) ? event.message.content : [event.message.content];
|
||||
const texts = parts
|
||||
.filter((p: any) => p.type === 'text' && p.text)
|
||||
.map((p: any) => p.text);
|
||||
if (texts.length > 0) return texts.join('\n');
|
||||
|
||||
// Tool use events (only reached if no text parts found)
|
||||
const tools = parts.filter((p: any) => p.type === 'tool_use');
|
||||
if (tools.length > 0) {
|
||||
return tools.map((t: any) => `[${t.name}] ${typeof t.input === 'string' ? t.input.slice(0, 100) : JSON.stringify(t.input).slice(0, 100)}`).join('\n');
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
20
extensions/plannotator/generated/code-file.ts
Normal file
@@ -0,0 +1,20 @@
// @generated — DO NOT EDIT. Source: packages/shared/code-file.ts
export const CODE_FILE_REGEX = /(?:\.(tsx?|jsx?|py|rb|go|rs|java|c|cpp|h|hpp|cs|swift|kt|scala|sh|bash|zsh|sql|graphql|json|ya?ml|toml|ini|css|scss|less|xml|tf|lua|r|dart|ex|exs|vue|svelte|astro|zig|proto)|(?:^|\/)(Dockerfile|Makefile|Rakefile|Gemfile|Procfile|Vagrantfile|Brewfile|Justfile))$/i;

export const CODE_PATH_BARE_REGEX = /(?:\.{0,2}\/)?(?:[a-zA-Z0-9_@.\-\[\]]+\/)+[a-zA-Z0-9_.\-\[\]]+\.[a-zA-Z0-9]+/g;

const IMPLAUSIBLE_CHARS = /[{},*?\s]/;

export function isPlausibleCodeFilePath(input: string): boolean {
  return !IMPLAUSIBLE_CHARS.test(input);
}

export function isCodeFilePath(input: string): boolean {
  if (!isPlausibleCodeFilePath(input)) return false;
  return CODE_FILE_REGEX.test(input.replace(/#.*$/, ''))
    && !input.startsWith('http://') && !input.startsWith('https://');
}

export function isCodeFilePathStrict(input: string): boolean {
  return input.includes('/') && isCodeFilePath(input);
}
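A few examples of what the predicates above accept and reject — the paths are illustrative:

```ts
import { isCodeFilePath, isCodeFilePathStrict, isPlausibleCodeFilePath } from "./code-file";

isCodeFilePath("src/server/review.ts");        // true
isCodeFilePath("Dockerfile");                  // true (special filenames match too)
isCodeFilePath("https://example.com/app.ts");  // false — URLs are excluded
isCodeFilePathStrict("Dockerfile");            // false — strict form requires a "/"
isPlausibleCodeFilePath("src/{a,b}.ts");       // false — glob/brace characters are rejected
```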
408
extensions/plannotator/generated/codex-review.ts
Normal file
@@ -0,0 +1,408 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/server/codex-review.ts
|
||||
/**
|
||||
* Codex Review Agent — prompt, command builder, output parser, and finding transformer.
|
||||
*
|
||||
* Encapsulates all Codex-specific logic for the AI review agent integration.
|
||||
* The review server (review.ts) calls into this module via the agent-jobs callbacks.
|
||||
*/
|
||||
|
||||
import { join } from "node:path";
|
||||
import { homedir, tmpdir } from "node:os";
|
||||
import { appendFile, mkdir, unlink, writeFile, readFile } from "node:fs/promises";
|
||||
import { existsSync } from "node:fs";
|
||||
import type { DiffType } from "./review-core.js";
|
||||
import type { PRMetadata } from "./pr-provider.js";
|
||||
import { toRelativePath } from "./path-utils.js";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Debug log — only active when PLANNOTATOR_DEBUG is set
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const DEBUG_ENABLED = !!process.env.PLANNOTATOR_DEBUG;
|
||||
const DEBUG_LOG_PATH = join(homedir(), ".plannotator", "codex-review-debug.log");
|
||||
|
||||
async function debugLog(label: string, data?: unknown): Promise<void> {
|
||||
if (!DEBUG_ENABLED) return;
|
||||
try {
|
||||
await mkdir(join(homedir(), ".plannotator"), { recursive: true });
|
||||
const timestamp = new Date().toISOString();
|
||||
const line = data !== undefined
|
||||
? `[${timestamp}] ${label}: ${typeof data === "string" ? data : JSON.stringify(data, null, 2)}\n`
|
||||
: `[${timestamp}] ${label}\n`;
|
||||
await appendFile(DEBUG_LOG_PATH, line);
|
||||
} catch { /* never fail the main flow */ }
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Schema — embedded as a string, written to disk on first use.
|
||||
// Bun's compiled binary uses a virtual FS that external processes (codex)
|
||||
// can't read, so we materialize the schema to a real file.
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const CODEX_REVIEW_SCHEMA = JSON.stringify({
|
||||
type: "object",
|
||||
properties: {
|
||||
findings: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
title: { type: "string" },
|
||||
body: { type: "string" },
|
||||
confidence_score: { type: "number" },
|
||||
priority: { type: ["integer", "null"] },
|
||||
code_location: {
|
||||
type: "object",
|
||||
properties: {
|
||||
absolute_file_path: { type: "string" },
|
||||
line_range: {
|
||||
type: "object",
|
||||
properties: {
|
||||
start: { type: "integer" },
|
||||
end: { type: "integer" },
|
||||
},
|
||||
required: ["start", "end"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
required: ["absolute_file_path", "line_range"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
required: ["title", "body", "confidence_score", "priority", "code_location"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
overall_correctness: { type: "string" },
|
||||
overall_explanation: { type: "string" },
|
||||
overall_confidence_score: { type: "number" },
|
||||
},
|
||||
required: ["findings", "overall_correctness", "overall_explanation", "overall_confidence_score"],
|
||||
additionalProperties: false,
|
||||
});
|
||||
|
||||
const SCHEMA_DIR = join(homedir(), ".plannotator");
|
||||
const SCHEMA_FILE = join(SCHEMA_DIR, "codex-review-schema.json");
|
||||
let schemaMaterialized = false;
|
||||
|
||||
/** Ensure the schema file exists on disk and return its path. */
|
||||
async function ensureSchemaFile(): Promise<string> {
|
||||
if (!schemaMaterialized) {
|
||||
await mkdir(SCHEMA_DIR, { recursive: true });
|
||||
await writeFile(SCHEMA_FILE, CODEX_REVIEW_SCHEMA);
|
||||
schemaMaterialized = true;
|
||||
}
|
||||
return SCHEMA_FILE;
|
||||
}
|
||||
|
||||
export { SCHEMA_FILE as CODEX_REVIEW_SCHEMA_PATH };
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// System prompt — copied verbatim from codex-rs/core/review_prompt.md
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export const CODEX_REVIEW_SYSTEM_PROMPT = `# Review guidelines:
|
||||
|
||||
You are acting as a reviewer for a proposed code change made by another engineer.
|
||||
|
||||
Below are some default guidelines for determining whether the original author would appreciate the issue being flagged.
|
||||
|
||||
These are not the final word in determining whether an issue is a bug. In many cases, you will encounter other, more specific guidelines. These may be present elsewhere in a developer message, a user message, a file, or even elsewhere in this system message.
|
||||
Those guidelines should be considered to override these general instructions.
|
||||
|
||||
Here are the general guidelines for determining whether something is a bug and should be flagged.
|
||||
|
||||
1. It meaningfully impacts the accuracy, performance, security, or maintainability of the code.
|
||||
2. The bug is discrete and actionable (i.e. not a general issue with the codebase or a combination of multiple issues).
|
||||
3. Fixing the bug does not demand a level of rigor that is not present in the rest of the codebase (e.g. one doesn't need very detailed comments and input validation in a repository of one-off scripts in personal projects)
|
||||
4. The bug was introduced in the commit (pre-existing bugs should not be flagged).
|
||||
5. The author of the original PR would likely fix the issue if they were made aware of it.
|
||||
6. The bug does not rely on unstated assumptions about the codebase or author's intent.
|
||||
7. It is not enough to speculate that a change may disrupt another part of the codebase, to be considered a bug, one must identify the other parts of the code that are provably affected.
|
||||
8. The bug is clearly not just an intentional change by the original author.
|
||||
|
||||
When flagging a bug, you will also provide an accompanying comment. Once again, these guidelines are not the final word on how to construct a comment -- defer to any subsequent guidelines that you encounter.
|
||||
|
||||
1. The comment should be clear about why the issue is a bug.
|
||||
2. The comment should appropriately communicate the severity of the issue. It should not claim that an issue is more severe than it actually is.
|
||||
3. The comment should be brief. The body should be at most 1 paragraph. It should not introduce line breaks within the natural language flow unless it is necessary for the code fragment.
|
||||
4. The comment should not include any chunks of code longer than 3 lines. Any code chunks should be wrapped in markdown inline code tags or a code block.
|
||||
5. The comment should clearly and explicitly communicate the scenarios, environments, or inputs that are necessary for the bug to arise. The comment should immediately indicate that the issue's severity depends on these factors.
|
||||
6. The comment's tone should be matter-of-fact and not accusatory or overly positive. It should read as a helpful AI assistant suggestion without sounding too much like a human reviewer.
|
||||
7. The comment should be written such that the original author can immediately grasp the idea without close reading.
|
||||
8. The comment should avoid excessive flattery and comments that are not helpful to the original author. The comment should avoid phrasing like "Great job ...", "Thanks for ...".
|
||||
|
||||
Below are some more detailed guidelines that you should apply to this specific review.
|
||||
|
||||
HOW MANY FINDINGS TO RETURN:
|
||||
|
||||
Output all findings that the original author would fix if they knew about it. If there is no finding that a person would definitely love to see and fix, prefer outputting no findings. Do not stop at the first qualifying finding. Continue until you've listed every qualifying finding.
|
||||
|
||||
GUIDELINES:
|
||||
|
||||
- Ignore trivial style unless it obscures meaning or violates documented standards.
|
||||
- Use one comment per distinct issue (or a multi-line range if necessary).
|
||||
- Use \`\`\`suggestion blocks ONLY for concrete replacement code (minimal lines; no commentary inside the block).
|
||||
- In every \`\`\`suggestion block, preserve the exact leading whitespace of the replaced lines (spaces vs tabs, number of spaces).
|
||||
- Do NOT introduce or remove outer indentation levels unless that is the actual fix.
|
||||
|
||||
The comments will be presented in the code review as inline comments. You should avoid providing unnecessary location details in the comment body. Always keep the line range as short as possible for interpreting the issue. Avoid ranges longer than 5–10 lines; instead, choose the most suitable subrange that pinpoints the problem.
|
||||
|
||||
At the beginning of the finding title, tag the bug with priority level. For example "[P1] Un-padding slices along wrong tensor dimensions". [P0] – Drop everything to fix. Blocking release, operations, or major usage. Only use for universal issues that do not depend on any assumptions about the inputs. · [P1] – Urgent. Should be addressed in the next cycle · [P2] – Normal. To be fixed eventually · [P3] – Low. Nice to have.
|
||||
|
||||
Additionally, include a numeric priority field in the JSON output for each finding: set "priority" to 0 for P0, 1 for P1, 2 for P2, or 3 for P3. If a priority cannot be determined, omit the field or use null.
|
||||
|
||||
At the end of your findings, output an "overall correctness" verdict of whether or not the patch should be considered "correct".
|
||||
Correct implies that existing code and tests will not break, and the patch is free of bugs and other blocking issues.
|
||||
Ignore non-blocking issues such as style, formatting, typos, documentation, and other nits.
|
||||
|
||||
FORMATTING GUIDELINES:
|
||||
The finding description should be one paragraph.`;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// User message builder
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** Build the dynamic user message based on review context. */
|
||||
export function buildCodexReviewUserMessage(
|
||||
patch: string,
|
||||
diffType: DiffType,
|
||||
options?: { defaultBranch?: string; hasLocalAccess?: boolean; prDiffScope?: string },
|
||||
prMetadata?: PRMetadata,
|
||||
): string {
|
||||
// PR/MR mode — pass the link, with local context if --local
|
||||
if (prMetadata) {
|
||||
if (options?.prDiffScope === "full-stack") {
|
||||
return [
|
||||
`Full-stack review of ${prMetadata.url}`,
|
||||
"",
|
||||
"This is a stacked PR. The diff below shows ALL accumulated changes from the repository default branch through this PR's head (not just this PR's own layer).",
|
||||
"Review the complete diff for issues that span the stack.",
|
||||
"",
|
||||
"```diff",
|
||||
patch,
|
||||
"```",
|
||||
].join("\n");
|
||||
}
|
||||
if (options?.hasLocalAccess) {
|
||||
return [
|
||||
prMetadata.url,
|
||||
"",
|
||||
"You are in a local worktree checked out at the PR head. The code is available locally.",
|
||||
`To see the PR changes, diff against the remote base branch: git diff origin/${prMetadata.baseBranch}...HEAD`,
|
||||
"Do NOT diff against the local `main` branch — it may be stale. Always use origin/.",
|
||||
].join("\n");
|
||||
}
|
||||
return prMetadata.url;
|
||||
}
|
||||
|
||||
// Local mode — Codex has full file/git access
|
||||
const effectiveDiffType = diffType.startsWith("worktree:")
|
||||
? diffType.split(":").pop() || "uncommitted"
|
||||
: diffType;
|
||||
|
||||
switch (effectiveDiffType) {
|
||||
case "uncommitted":
|
||||
return "Review the current code changes (staged, unstaged, and untracked files) and provide prioritized findings.";
|
||||
|
||||
case "staged":
|
||||
return "Review the currently staged code changes (`git diff --staged`) and provide prioritized findings.";
|
||||
|
||||
case "unstaged":
|
||||
return "Review the unstaged code changes (tracked modifications and untracked files) and provide prioritized findings.";
|
||||
|
||||
case "last-commit":
|
||||
return "Review the code changes introduced in the last commit (`git diff HEAD~1..HEAD`) and provide prioritized findings.";
|
||||
|
||||
case "branch": {
|
||||
const base = options?.defaultBranch || "main";
|
||||
return `Review the code changes against the base branch '${base}'. Run \`git diff ${base}..HEAD\` to inspect the changes. Provide prioritized, actionable findings.`;
|
||||
}
|
||||
|
||||
case "merge-base": {
|
||||
const base = options?.defaultBranch || "main";
|
||||
return `Review the PR-style diff against base '${base}'. First find the common ancestor with \`git merge-base ${base} HEAD\`, then run \`git diff <merge-base>..HEAD\` using that commit to inspect only the changes introduced on this branch (matches GitHub's PR view). Provide prioritized, actionable findings.`;
|
||||
}
|
||||
|
||||
case "all":
|
||||
return "Review every file in the repository (all files shown as additions, diffed against an empty tree). Provide prioritized, actionable findings.";
|
||||
|
||||
default:
|
||||
// p4 or unknown — fall back to generic with inlined diff
|
||||
return [
|
||||
"Review the following code changes and provide prioritized findings.",
|
||||
"",
|
||||
"```diff",
|
||||
patch,
|
||||
"```",
|
||||
].join("\n");
|
||||
}
|
||||
}
|
||||
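An illustrative call into the message builder above — the DiffType value and branch name are assumptions; real values come from review-core.ts and the review server:

```ts
import { buildCodexReviewUserMessage } from "./codex-review";

buildCodexReviewUserMessage("", "merge-base", { defaultBranch: "main" });
// → "Review the PR-style diff against base 'main'. First find the common ancestor with
//    `git merge-base main HEAD`, then run `git diff <merge-base>..HEAD` using that commit
//    ... (matches GitHub's PR view). Provide prioritized, actionable findings."
```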
|
||||
// ---------------------------------------------------------------------------
|
||||
// Command builder
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface CodexCommandOptions {
|
||||
cwd: string;
|
||||
outputPath: string;
|
||||
prompt: string;
|
||||
model?: string;
|
||||
reasoningEffort?: string;
|
||||
fastMode?: boolean;
|
||||
}
|
||||
|
||||
/** Build the `codex exec` argv array. Materializes the schema file on first call. */
|
||||
export async function buildCodexCommand(options: CodexCommandOptions): Promise<string[]> {
|
||||
const { cwd, outputPath, prompt, model, reasoningEffort, fastMode } = options;
|
||||
const schemaPath = await ensureSchemaFile();
|
||||
|
||||
const command = [
|
||||
"codex",
|
||||
...(model ? ["-m", model] : []),
|
||||
...(reasoningEffort ? ["-c", `model_reasoning_effort=${reasoningEffort}`] : []),
|
||||
...(fastMode ? ["-c", "service_tier=fast"] : []),
|
||||
"exec",
|
||||
"--output-schema", schemaPath,
|
||||
"-o", outputPath,
|
||||
"--full-auto",
|
||||
"--ephemeral",
|
||||
"-C", cwd,
|
||||
prompt,
|
||||
];
|
||||
|
||||
debugLog("BUILD_COMMAND", {
|
||||
cwd,
|
||||
outputPath,
|
||||
schemaPath,
|
||||
promptLength: prompt.length,
|
||||
command: command.map((c, i) => i === command.length - 1 ? `<prompt: ${c.length} chars>` : c),
|
||||
});
|
||||
|
||||
return command;
|
||||
}
|
||||
|
||||
/** Generate a unique temp file path for Codex output. */
|
||||
export function generateOutputPath(): string {
|
||||
return join(tmpdir(), `plannotator-codex-${crypto.randomUUID()}.json`);
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Output parsing — matches Codex's native ReviewOutputEvent schema
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface CodexCodeLocation {
|
||||
absolute_file_path: string;
|
||||
line_range: { start: number; end: number };
|
||||
}
|
||||
|
||||
export interface CodexFinding {
|
||||
title: string;
|
||||
body: string;
|
||||
confidence_score: number;
|
||||
priority: number | null;
|
||||
code_location: CodexCodeLocation;
|
||||
}
|
||||
|
||||
export interface CodexReviewOutput {
|
||||
findings: CodexFinding[];
|
||||
overall_correctness: string;
|
||||
overall_explanation: string;
|
||||
overall_confidence_score: number;
|
||||
}
|
||||
|
||||
/** Read and parse the Codex -o output file. Returns null on any failure. */
|
||||
export async function parseCodexOutput(outputPath: string): Promise<CodexReviewOutput | null> {
|
||||
await debugLog("PARSE_OUTPUT_START", { outputPath });
|
||||
|
||||
try {
|
||||
if (!existsSync(outputPath)) {
|
||||
await debugLog("PARSE_OUTPUT_FILE_MISSING", outputPath);
|
||||
return null;
|
||||
}
|
||||
|
||||
const text = await readFile(outputPath, "utf-8");
|
||||
|
||||
// Clean up temp file
|
||||
try { await unlink(outputPath); } catch { /* ignore */ }
|
||||
|
||||
if (!text.trim()) {
|
||||
await debugLog("PARSE_OUTPUT_EMPTY");
|
||||
return null;
|
||||
}
|
||||
|
||||
const parsed = JSON.parse(text);
|
||||
if (!parsed || !Array.isArray(parsed.findings)) {
|
||||
await debugLog("PARSE_OUTPUT_INVALID_SHAPE", { hasFindings: !!parsed?.findings });
|
||||
return null;
|
||||
}
|
||||
|
||||
await debugLog("PARSE_OUTPUT_SUCCESS", {
|
||||
findingsCount: parsed.findings.length,
|
||||
overall_correctness: parsed.overall_correctness,
|
||||
overall_confidence_score: parsed.overall_confidence_score,
|
||||
});
|
||||
|
||||
return parsed as CodexReviewOutput;
|
||||
} catch (err) {
|
||||
await debugLog("PARSE_OUTPUT_ERROR", err instanceof Error ? err.message : String(err));
|
||||
// Clean up on error too
|
||||
try { await unlink(outputPath); } catch { /* ignore */ }
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Finding → external annotation transform
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface ReviewAnnotationInput {
|
||||
source: string;
|
||||
filePath: string;
|
||||
lineStart: number;
|
||||
lineEnd: number;
|
||||
type: string;
|
||||
side: string;
|
||||
scope: string;
|
||||
text: string;
|
||||
author: string;
|
||||
}
|
||||
|
||||
/** Transform review findings (provider-agnostic) into the external annotation format. */
|
||||
export function transformReviewFindings(
|
||||
findings: CodexFinding[],
|
||||
source: string,
|
||||
cwd?: string,
|
||||
author?: string,
|
||||
): ReviewAnnotationInput[] {
|
||||
const annotations = findings
|
||||
.filter((f) =>
|
||||
f.code_location?.absolute_file_path &&
|
||||
typeof f.code_location?.line_range?.start === "number" &&
|
||||
typeof f.code_location?.line_range?.end === "number"
|
||||
)
|
||||
.map((f) => ({
|
||||
source,
|
||||
filePath: toRelativePath(f.code_location.absolute_file_path, cwd),
|
||||
lineStart: f.code_location.line_range.start,
|
||||
lineEnd: f.code_location.line_range.end,
|
||||
type: "comment",
|
||||
side: "new",
|
||||
scope: "line",
|
||||
text: `${f.title}\n\n${f.body}`.trim(),
|
||||
author: author ?? "Review Agent",
|
||||
}));
|
||||
|
||||
debugLog("TRANSFORM_FINDINGS", {
|
||||
inputCount: findings.length,
|
||||
outputCount: annotations.length,
|
||||
annotations: annotations.map((a) => ({
|
||||
filePath: a.filePath,
|
||||
lineStart: a.lineStart,
|
||||
lineEnd: a.lineEnd,
|
||||
textPreview: a.text.slice(0, 80),
|
||||
})),
|
||||
});
|
||||
|
||||
return annotations;
|
||||
}
|
||||
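A rough sketch of how a caller might wire these pieces together — the process handling, prompt assembly, and annotation sink below are simplified assumptions, not the actual review.ts flow:

```ts
import { spawn } from "node:child_process";
import {
  CODEX_REVIEW_SYSTEM_PROMPT,
  buildCodexReviewUserMessage,
  buildCodexCommand,
  generateOutputPath,
  parseCodexOutput,
  transformReviewFindings,
} from "./codex-review";

const outputPath = generateOutputPath();
// Assumption: system prompt and user message are concatenated into one prompt string.
const prompt = `${CODEX_REVIEW_SYSTEM_PROMPT}\n\n${buildCodexReviewUserMessage("", "uncommitted")}`;
const argv = await buildCodexCommand({ cwd: process.cwd(), outputPath, prompt });

const child = spawn(argv[0], argv.slice(1), { stdio: "inherit" });
child.on("exit", async () => {
  const output = await parseCodexOutput(outputPath);   // null on any failure
  if (!output) return;
  const annotations = transformReviewFindings(output.findings, "codex-review", process.cwd(), "Codex");
  // ...hand the annotations to the review server / SSE store
});
```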
227
extensions/plannotator/generated/config.ts
Normal file
@@ -0,0 +1,227 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/config.ts
|
||||
/**
|
||||
* Plannotator Config
|
||||
*
|
||||
* Reads/writes ~/.plannotator/config.json for persistent user settings.
|
||||
* Runtime-agnostic: uses only node:fs, node:os, node:child_process.
|
||||
*/
|
||||
|
||||
import { homedir } from "os";
|
||||
import { join } from "path";
|
||||
import { readFileSync, writeFileSync, mkdirSync, existsSync } from "fs";
|
||||
import { execSync } from "child_process";
|
||||
|
||||
export type DefaultDiffType = 'uncommitted' | 'unstaged' | 'staged' | 'merge-base' | 'all';
|
||||
|
||||
export interface DiffOptions {
|
||||
diffStyle?: 'split' | 'unified';
|
||||
overflow?: 'scroll' | 'wrap';
|
||||
diffIndicators?: 'bars' | 'classic' | 'none';
|
||||
lineDiffType?: 'word-alt' | 'word' | 'char' | 'none';
|
||||
showLineNumbers?: boolean;
|
||||
showDiffBackground?: boolean;
|
||||
fontFamily?: string;
|
||||
fontSize?: string;
|
||||
hideWhitespace?: boolean;
|
||||
defaultDiffType?: DefaultDiffType;
|
||||
}
|
||||
|
||||
/** Single conventional comment label entry stored in config.json */
|
||||
export interface CCLabelConfig {
|
||||
label: string;
|
||||
display: string;
|
||||
blocking: boolean;
|
||||
}
|
||||
|
||||
export type PromptSectionOverrides = Record<string, string | undefined>;
|
||||
|
||||
export type PromptRuntime =
|
||||
| "claude-code"
|
||||
| "opencode"
|
||||
| "copilot-cli"
|
||||
| "pi"
|
||||
| "codex"
|
||||
| "gemini-cli";
|
||||
|
||||
interface PromptSectionConfig {
|
||||
[key: string]: string | Partial<Record<PromptRuntime, PromptSectionOverrides>> | undefined;
|
||||
runtimes?: Partial<Record<PromptRuntime, PromptSectionOverrides>>;
|
||||
}
|
||||
|
||||
export interface PromptConfig {
|
||||
review?: PromptSectionConfig & {
|
||||
approved?: string;
|
||||
denied?: string;
|
||||
};
|
||||
plan?: PromptSectionConfig & {
|
||||
approved?: string;
|
||||
approvedWithNotes?: string;
|
||||
autoApproved?: string;
|
||||
denied?: string;
|
||||
};
|
||||
annotate?: PromptSectionConfig & {
|
||||
fileFeedback?: string;
|
||||
messageFeedback?: string;
|
||||
approved?: string;
|
||||
};
|
||||
}
|
||||
|
||||
const PROMPT_SECTIONS = ["review", "plan", "annotate"] as const;
|
||||
|
||||
export function mergePromptConfig(
|
||||
current?: PromptConfig,
|
||||
partial?: PromptConfig,
|
||||
): PromptConfig | undefined {
|
||||
if (!current && !partial) return undefined;
|
||||
|
||||
const result: Record<string, any> = { ...current, ...partial };
|
||||
|
||||
for (const section of PROMPT_SECTIONS) {
|
||||
const cur = current?.[section];
|
||||
const par = partial?.[section];
|
||||
if (cur || par) {
|
||||
result[section] = {
|
||||
...cur,
|
||||
...par,
|
||||
runtimes: (cur?.runtimes || par?.runtimes)
|
||||
? { ...cur?.runtimes, ...par?.runtimes }
|
||||
: undefined,
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
return result as PromptConfig;
|
||||
}
|
||||
|
||||
export interface PlannotatorConfig {
|
||||
displayName?: string;
|
||||
diffOptions?: DiffOptions;
|
||||
prompts?: PromptConfig;
|
||||
conventionalComments?: boolean;
|
||||
/** null = explicitly cleared (use defaults), undefined = not set */
|
||||
conventionalLabels?: CCLabelConfig[] | null;
|
||||
/**
|
||||
* Enable `gh attestation verify` during CLI installation/upgrade.
|
||||
* Read by scripts/install.sh|ps1|cmd on every run (not by any runtime code).
|
||||
* When true, the installer runs build-provenance verification after the
|
||||
* SHA256 checksum check; requires `gh` CLI installed and authenticated
|
||||
* (`gh auth login`). OS-level opt-in only — no UI surface. Default: false.
|
||||
*/
|
||||
verifyAttestation?: boolean;
|
||||
/**
|
||||
* Enable Jina Reader for URL-to-markdown conversion during annotation.
|
||||
* When true (default), `plannotator annotate <url>` routes through
|
||||
* r.jina.ai for better JS-rendered page support and reader-mode extraction.
|
||||
* Set to false to always use plain fetch + Turndown.
|
||||
*/
|
||||
jina?: boolean;
|
||||
}
|
||||
|
||||
const CONFIG_DIR = join(homedir(), ".plannotator");
|
||||
const CONFIG_PATH = join(CONFIG_DIR, "config.json");
|
||||
|
||||
/**
|
||||
* Load config from ~/.plannotator/config.json.
|
||||
* Returns {} on missing file or malformed JSON.
|
||||
*/
|
||||
export function loadConfig(): PlannotatorConfig {
|
||||
try {
|
||||
if (!existsSync(CONFIG_PATH)) return {};
|
||||
const raw = readFileSync(CONFIG_PATH, "utf-8");
|
||||
const parsed = JSON.parse(raw);
|
||||
return typeof parsed === "object" && parsed !== null ? parsed : {};
|
||||
} catch (e) {
|
||||
process.stderr.write(`[plannotator] Warning: failed to read config.json: ${e}\n`);
|
||||
return {};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save config by merging partial values into the existing file.
|
||||
* Creates ~/.plannotator/ directory if needed.
|
||||
*/
|
||||
export function saveConfig(partial: Partial<PlannotatorConfig>): void {
|
||||
try {
|
||||
const current = loadConfig();
|
||||
const mergedDiffOptions = (current.diffOptions || partial.diffOptions)
|
||||
? { ...current.diffOptions, ...partial.diffOptions }
|
||||
: undefined;
|
||||
const mergedPrompts = mergePromptConfig(current.prompts, partial.prompts);
|
||||
const merged = {
|
||||
...current,
|
||||
...partial,
|
||||
diffOptions: mergedDiffOptions,
|
||||
prompts: mergedPrompts,
|
||||
};
|
||||
mkdirSync(CONFIG_DIR, { recursive: true });
|
||||
writeFileSync(CONFIG_PATH, JSON.stringify(merged, null, 2) + "\n", "utf-8");
|
||||
} catch (e) {
|
||||
process.stderr.write(`[plannotator] Warning: failed to write config.json: ${e}\n`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect the git user name from `git config user.name`.
|
||||
* Returns null if git is unavailable, not in a repo, or user.name is not set.
|
||||
*/
|
||||
export function detectGitUser(): string | null {
|
||||
try {
|
||||
const name = execSync("git config user.name", { encoding: "utf-8", timeout: 3000 }).trim();
|
||||
return name || null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Build the serverConfig payload for API responses.
|
||||
* Reads config.json fresh each call so the response reflects the latest file on disk.
|
||||
*/
|
||||
export function getServerConfig(gitUser: string | null): {
|
||||
displayName?: string;
|
||||
diffOptions?: DiffOptions;
|
||||
gitUser?: string;
|
||||
conventionalComments?: boolean;
|
||||
conventionalLabels?: CCLabelConfig[] | null;
|
||||
} {
|
||||
const cfg = loadConfig();
|
||||
return {
|
||||
displayName: cfg.displayName,
|
||||
diffOptions: cfg.diffOptions,
|
||||
gitUser: gitUser ?? undefined,
|
||||
...(cfg.conventionalComments !== undefined && { conventionalComments: cfg.conventionalComments }),
|
||||
...(cfg.conventionalLabels !== undefined && { conventionalLabels: cfg.conventionalLabels }),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Read the user's preferred default diff type from config, falling back to 'unstaged'.
|
||||
*/
|
||||
export function resolveDefaultDiffType(cfg?: PlannotatorConfig): DefaultDiffType {
|
||||
const v = cfg?.diffOptions?.defaultDiffType as string | undefined;
|
||||
if (v === 'branch') return 'merge-base';
|
||||
return v === 'uncommitted' || v === 'unstaged' || v === 'staged' || v === 'merge-base' || v === 'all' ? v : 'unstaged';
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve whether to use Jina Reader for URL annotation.
|
||||
*
|
||||
* Priority (highest wins):
|
||||
* --no-jina CLI flag → PLANNOTATOR_JINA env var → config.jina → default true
|
||||
*/
|
||||
export function resolveUseJina(cliNoJina: boolean, config: PlannotatorConfig): boolean {
|
||||
// CLI flag has highest priority
|
||||
if (cliNoJina) return false;
|
||||
|
||||
// Environment variable
|
||||
const envVal = process.env.PLANNOTATOR_JINA;
|
||||
if (envVal !== undefined) {
|
||||
return envVal === "1" || envVal.toLowerCase() === "true";
|
||||
}
|
||||
|
||||
// Config file
|
||||
if (config.jina !== undefined) return config.jina;
|
||||
|
||||
// Default: enabled
|
||||
return true;
|
||||
}
|
||||
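A quick sketch of the read/merge/write cycle — the values are illustrative:

```ts
import { loadConfig, saveConfig, resolveDefaultDiffType, resolveUseJina } from "./config";

// Merges into ~/.plannotator/config.json, preserving unrelated keys and nested diffOptions.
saveConfig({ displayName: "Alex", diffOptions: { diffStyle: "split", showLineNumbers: true } });

const cfg = loadConfig();
resolveDefaultDiffType(cfg);   // "unstaged" unless diffOptions.defaultDiffType says otherwise
resolveUseJina(false, cfg);    // true unless --no-jina, PLANNOTATOR_JINA, or cfg.jina disable it
```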
65
extensions/plannotator/generated/draft.ts
Normal file
@@ -0,0 +1,65 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/draft.ts
|
||||
/**
|
||||
* Draft Storage
|
||||
*
|
||||
* Persists annotation drafts to ~/.plannotator/drafts/ so they survive
|
||||
* server crashes. Each draft is keyed by a content hash of the plan/diff
|
||||
* it was created against.
|
||||
*
|
||||
* Runtime-agnostic: uses only node:fs, node:path, node:os, node:crypto.
|
||||
*/
|
||||
|
||||
import { homedir } from "os";
|
||||
import { join } from "path";
|
||||
import { mkdirSync, writeFileSync, readFileSync, unlinkSync, existsSync } from "fs";
|
||||
import { createHash } from "crypto";
|
||||
|
||||
/**
|
||||
* Get the drafts directory, creating it if needed.
|
||||
*/
|
||||
export function getDraftDir(): string {
|
||||
const dir = join(homedir(), ".plannotator", "drafts");
|
||||
mkdirSync(dir, { recursive: true });
|
||||
return dir;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a stable key from content using truncated SHA-256.
|
||||
* Same content always produces the same key across server restarts.
|
||||
*/
|
||||
export function contentHash(content: string): string {
|
||||
return createHash("sha256").update(content).digest("hex").slice(0, 16);
|
||||
}
|
||||
|
||||
/**
|
||||
* Save a draft to disk.
|
||||
*/
|
||||
export function saveDraft(key: string, data: object): void {
|
||||
const dir = getDraftDir();
|
||||
writeFileSync(join(dir, `${key}.json`), JSON.stringify(data), "utf-8");
|
||||
}
|
||||
|
||||
/**
|
||||
* Load a draft from disk. Returns null if not found.
|
||||
*/
|
||||
export function loadDraft(key: string): object | null {
|
||||
const filePath = join(getDraftDir(), `${key}.json`);
|
||||
try {
|
||||
if (!existsSync(filePath)) return null;
|
||||
return JSON.parse(readFileSync(filePath, "utf-8"));
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Delete a draft from disk. No-op if not found.
|
||||
*/
|
||||
export function deleteDraft(key: string): void {
|
||||
const filePath = join(getDraftDir(), `${key}.json`);
|
||||
try {
|
||||
if (existsSync(filePath)) unlinkSync(filePath);
|
||||
} catch {
|
||||
// Ignore delete failures
|
||||
}
|
||||
}
|
||||
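A small usage sketch — the draft payload shape is illustrative; only the keying and persistence behavior come from the helpers above:

```ts
import { contentHash, saveDraft, loadDraft, deleteDraft } from "./draft";

const planMarkdown = "- [ ] Add validation to the login form";
const key = contentHash(planMarkdown);        // stable 16-char hex key for this content

saveDraft(key, { annotations: [], updatedAt: Date.now() });
loadDraft(key);                               // → the saved object, or null if missing/corrupt
deleteDraft(key);                             // no-op if the file is already gone
```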
398
extensions/plannotator/generated/external-annotation.ts
Normal file
@@ -0,0 +1,398 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/external-annotation.ts
|
||||
/**
|
||||
* External Annotations — shared types, store logic, and SSE helpers.
|
||||
*
|
||||
* Runtime-agnostic: no node:fs, no node:http, no Bun APIs.
|
||||
* Both the Bun server handler and Pi server handler import this module
|
||||
* and wrap it with their respective HTTP transport layers.
|
||||
*
|
||||
* The store is generic — plan servers store Annotation objects,
|
||||
* review servers store CodeAnnotation objects. The mode-specific
|
||||
* input transformers handle validation and field assignment.
|
||||
*/
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Types
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** Constraint for any annotation type the store can hold. */
|
||||
export type StorableAnnotation = { id: string; source?: string };
|
||||
|
||||
export type ExternalAnnotationEvent<T = unknown> =
|
||||
| { type: "snapshot"; annotations: T[] }
|
||||
| { type: "add"; annotations: T[] }
|
||||
| { type: "remove"; ids: string[] }
|
||||
| { type: "clear"; source?: string }
|
||||
| { type: "update"; id: string; annotation: T };
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// SSE helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** Heartbeat comment to keep SSE connections alive (sent every 30s). */
|
||||
export const HEARTBEAT_COMMENT = ":\n\n";
|
||||
|
||||
/** Interval in ms between heartbeat comments. */
|
||||
export const HEARTBEAT_INTERVAL_MS = 30_000;
|
||||
|
||||
/** Encode an event as an SSE `data:` line. */
|
||||
export function serializeSSEEvent<T>(event: ExternalAnnotationEvent<T>): string {
|
||||
return `data: ${JSON.stringify(event)}\n\n`;
|
||||
}
|
||||
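What one SSE frame looks like on the wire — the annotation object here is a minimal StorableAnnotation, not a full plan or code annotation:

```ts
import { serializeSSEEvent, HEARTBEAT_COMMENT, HEARTBEAT_INTERVAL_MS } from "./external-annotation";

serializeSSEEvent({ type: "add", annotations: [{ id: "a1", source: "codex-review" }] });
// → 'data: {"type":"add","annotations":[{"id":"a1","source":"codex-review"}]}\n\n'

// A transport layer would also write HEARTBEAT_COMMENT (":\n\n") every
// HEARTBEAT_INTERVAL_MS to keep idle connections alive.
```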
|
||||
// ---------------------------------------------------------------------------
|
||||
// Input validation — shared helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface ParseError {
|
||||
error: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Unwrap a POST body into an array of raw input objects.
|
||||
*
|
||||
* Accepts either:
|
||||
* - A single annotation object: `{ source: "...", ... }`
|
||||
* - A batch wrapper: `{ annotations: [{ source: "...", ... }, ...] }`
|
||||
*/
|
||||
function unwrapBody(body: unknown): Record<string, unknown>[] | ParseError {
|
||||
if (!body || typeof body !== "object") {
|
||||
return { error: "Request body must be a JSON object" };
|
||||
}
|
||||
|
||||
const obj = body as Record<string, unknown>;
|
||||
|
||||
// Batch format: { annotations: [...] }
|
||||
if (Array.isArray(obj.annotations)) {
|
||||
if (obj.annotations.length === 0) {
|
||||
return { error: "annotations array must not be empty" };
|
||||
}
|
||||
const items: Record<string, unknown>[] = [];
|
||||
for (let i = 0; i < obj.annotations.length; i++) {
|
||||
const item = obj.annotations[i];
|
||||
if (!item || typeof item !== "object") {
|
||||
return { error: `annotations[${i}] must be an object` };
|
||||
}
|
||||
items.push(item as Record<string, unknown>);
|
||||
}
|
||||
return items;
|
||||
}
|
||||
|
||||
// Single format: { source: "...", ... }
|
||||
if (typeof obj.source === "string") {
|
||||
return [obj as Record<string, unknown>];
|
||||
}
|
||||
|
||||
return { error: 'Missing required "source" field or "annotations" array' };
|
||||
}
|
||||
|
||||
function requireString(obj: Record<string, unknown>, field: string, index: number): string | ParseError {
|
||||
const val = obj[field];
|
||||
if (typeof val !== "string" || val.length === 0) {
|
||||
return { error: `annotations[${index}] missing required "${field}" field` };
|
||||
}
|
||||
return val;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Plan mode transformer — produces Annotation objects
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** The Annotation type shape for plan mode (mirrors packages/ui/types.ts). */
|
||||
interface PlanAnnotation {
|
||||
id: string;
|
||||
blockId: string;
|
||||
startOffset: number;
|
||||
endOffset: number;
|
||||
type: string; // AnnotationType value
|
||||
text?: string;
|
||||
originalText: string;
|
||||
createdAt: number;
|
||||
author?: string;
|
||||
source?: string;
|
||||
}
|
||||
|
||||
const VALID_PLAN_TYPES = ["DELETION", "COMMENT", "GLOBAL_COMMENT"];
|
||||
|
||||
export function transformPlanInput(
|
||||
body: unknown,
|
||||
): { annotations: PlanAnnotation[] } | ParseError {
|
||||
const items = unwrapBody(body);
|
||||
if ("error" in items) return items;
|
||||
|
||||
const annotations: PlanAnnotation[] = [];
|
||||
for (let i = 0; i < items.length; i++) {
|
||||
const obj = items[i];
|
||||
|
||||
const source = requireString(obj, "source", i);
|
||||
if (typeof source !== "string") return source;
|
||||
|
||||
// Must have text content
|
||||
if (typeof obj.text !== "string" || obj.text.length === 0) {
|
||||
return { error: `annotations[${i}] missing required "text" field` };
|
||||
}
|
||||
|
||||
// Validate type if provided, default to GLOBAL_COMMENT
|
||||
const type = typeof obj.type === "string" ? obj.type : "GLOBAL_COMMENT";
|
||||
if (!VALID_PLAN_TYPES.includes(type)) {
|
||||
return {
|
||||
error: `annotations[${i}] invalid type "${type}". Must be one of: ${VALID_PLAN_TYPES.join(", ")}`,
|
||||
};
|
||||
}
|
||||
|
||||
// DELETION requires originalText (the text to remove)
|
||||
if (type === "DELETION" && (typeof obj.originalText !== "string" || obj.originalText.length === 0)) {
|
||||
return { error: `annotations[${i}] DELETION type requires non-empty "originalText" field` };
|
||||
}
|
||||
|
||||
// COMMENT requires originalText so the renderer can pin it to a phrase.
|
||||
// External agents that want sidebar-only feedback should use GLOBAL_COMMENT
|
||||
// instead — without a phrase to anchor to, a COMMENT renders as an empty
|
||||
// quote bubble in the sidebar and exports as `Feedback on: ""`.
|
||||
if (type === "COMMENT" && (typeof obj.originalText !== "string" || obj.originalText.length === 0)) {
|
||||
return {
|
||||
error: `annotations[${i}] COMMENT requires non-empty "originalText" field. Use GLOBAL_COMMENT for sidebar-only feedback.`,
|
||||
};
|
||||
}
|
||||
|
||||
annotations.push({
|
||||
id: crypto.randomUUID(),
|
||||
blockId: "external",
|
||||
startOffset: 0,
|
||||
endOffset: 0,
|
||||
type,
|
||||
text: String(obj.text),
|
||||
originalText: typeof obj.originalText === "string" ? obj.originalText : "",
|
||||
createdAt: Date.now(),
|
||||
author: typeof obj.author === "string" ? obj.author : undefined,
|
||||
source,
|
||||
});
|
||||
}
|
||||
|
||||
return { annotations };
|
||||
}
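// Usage sketch (illustrative only; the input body is hypothetical):
//
//   const parsed = transformPlanInput({ source: "ci-bot", text: "Add a rollout step." });
//   if ("error" in parsed) {
//     // respond 400 with parsed.error
//   } else {
//     // parsed.annotations[0].type === "GLOBAL_COMMENT" (the default when no type is given)
//   }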
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Review mode transformer — produces CodeAnnotation objects
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/** The CodeAnnotation type shape for review mode (mirrors packages/ui/types.ts). */
|
||||
interface ReviewAnnotation {
|
||||
id: string;
|
||||
type: string; // CodeAnnotationType value
|
||||
scope?: string;
|
||||
filePath: string;
|
||||
lineStart: number;
|
||||
lineEnd: number;
|
||||
side: string;
|
||||
text?: string;
|
||||
suggestedCode?: string;
|
||||
originalCode?: string;
|
||||
createdAt: number;
|
||||
author?: string;
|
||||
source?: string;
|
||||
// Agent review metadata (optional — only set by Claude review findings)
|
||||
severity?: string; // "important" | "nit" | "pre_existing"
|
||||
reasoning?: string; // Validation chain explaining how the issue was confirmed
|
||||
}
|
||||
|
||||
const VALID_REVIEW_TYPES = ["comment", "suggestion", "concern"];
|
||||
const VALID_SIDES = ["old", "new"];
|
||||
const VALID_SCOPES = ["line", "file"];
|
||||
|
||||
export function transformReviewInput(
|
||||
body: unknown,
|
||||
): { annotations: ReviewAnnotation[] } | ParseError {
|
||||
const items = unwrapBody(body);
|
||||
if ("error" in items) return items;
|
||||
|
||||
const annotations: ReviewAnnotation[] = [];
|
||||
for (let i = 0; i < items.length; i++) {
|
||||
const obj = items[i];
|
||||
|
||||
const source = requireString(obj, "source", i);
|
||||
if (typeof source !== "string") return source;
|
||||
|
||||
const filePath = requireString(obj, "filePath", i);
|
||||
if (typeof filePath !== "string") return filePath;
|
||||
|
||||
if (typeof obj.lineStart !== "number") {
|
||||
return { error: `annotations[${i}] missing required "lineStart" field` };
|
||||
}
|
||||
if (typeof obj.lineEnd !== "number") {
|
||||
return { error: `annotations[${i}] missing required "lineEnd" field` };
|
||||
}
|
||||
|
||||
// side: optional, defaults to "new"
|
||||
const side = typeof obj.side === "string" ? obj.side : "new";
|
||||
if (!VALID_SIDES.includes(side)) {
|
||||
return {
|
||||
error: `annotations[${i}] invalid side "${side}". Must be one of: ${VALID_SIDES.join(", ")}`,
|
||||
};
|
||||
}
|
||||
|
||||
// type: optional, defaults to "comment"
|
||||
const type = typeof obj.type === "string" ? obj.type : "comment";
|
||||
if (!VALID_REVIEW_TYPES.includes(type)) {
|
||||
return {
|
||||
error: `annotations[${i}] invalid type "${type}". Must be one of: ${VALID_REVIEW_TYPES.join(", ")}`,
|
||||
};
|
||||
}
|
||||
|
||||
// scope: optional, defaults to "line"
|
||||
const scope = typeof obj.scope === "string" ? obj.scope : "line";
|
||||
if (!VALID_SCOPES.includes(scope)) {
|
||||
return {
|
||||
error: `annotations[${i}] invalid scope "${scope}". Must be one of: ${VALID_SCOPES.join(", ")}`,
|
||||
};
|
||||
}
|
||||
|
||||
// Must have at least text or suggestedCode
|
||||
if (typeof obj.text !== "string" && typeof obj.suggestedCode !== "string") {
|
||||
return {
|
||||
error: `annotations[${i}] must have at least one of: text, suggestedCode`,
|
||||
};
|
||||
}
|
||||
|
||||
annotations.push({
|
||||
id: crypto.randomUUID(),
|
||||
type,
|
||||
scope,
|
||||
filePath,
|
||||
lineStart: obj.lineStart,
|
||||
lineEnd: obj.lineEnd,
|
||||
side,
|
||||
text: typeof obj.text === "string" ? obj.text : undefined,
|
||||
suggestedCode: typeof obj.suggestedCode === "string" ? obj.suggestedCode : undefined,
|
||||
originalCode: typeof obj.originalCode === "string" ? obj.originalCode : undefined,
|
||||
createdAt: Date.now(),
|
||||
author: typeof obj.author === "string" ? obj.author : undefined,
|
||||
source,
|
||||
// Agent review metadata (optional — only set by Claude review findings)
|
||||
...(typeof obj.severity === "string" && { severity: obj.severity }),
|
||||
...(typeof obj.reasoning === "string" && { reasoning: obj.reasoning }),
|
||||
});
|
||||
}
|
||||
|
||||
return { annotations };
|
||||
}
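// Usage sketch (illustrative only; file path and lines are hypothetical). side,
// type, and scope fall back to "new"/"comment"/"line" when omitted:
//
//   const parsed = transformReviewInput({
//     source: "ci-bot",
//     filePath: "src/auth.ts",
//     lineStart: 10,
//     lineEnd: 12,
//     type: "suggestion",
//     suggestedCode: "const token = verify(jwt);",
//   });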
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Annotation Store (generic)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
type MutationListener<T> = (event: ExternalAnnotationEvent<T>) => void;
|
||||
|
||||
export interface AnnotationStore<T extends StorableAnnotation> {
|
||||
/** Add fully-formed annotations. Returns the added annotations. */
|
||||
add(items: T[]): T[];
|
||||
/** Remove an annotation by ID. Returns true if found. */
|
||||
remove(id: string): boolean;
|
||||
/** Remove all annotations from a specific source. Returns count removed. */
|
||||
clearBySource(source: string): number;
|
||||
/** Update an annotation by ID. Returns the updated annotation, or null if not found. */
|
||||
update(id: string, fields: Partial<T>): T | null;
|
||||
/** Remove all annotations. Returns count removed. */
|
||||
clearAll(): number;
|
||||
/** Get all annotations (snapshot). */
|
||||
getAll(): T[];
|
||||
/** Monotonic version counter — incremented on every mutation. */
|
||||
readonly version: number;
|
||||
/** Register a listener for mutation events. Returns unsubscribe function. */
|
||||
onMutation(listener: MutationListener<T>): () => void;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create an in-memory annotation store.
|
||||
*
|
||||
* The store is runtime-agnostic — it holds data and emits events.
|
||||
* HTTP transport (SSE broadcasting, request parsing) is handled by
|
||||
* the server-specific adapter (Bun or Pi).
|
||||
*/
|
||||
export function createAnnotationStore<T extends StorableAnnotation>(): AnnotationStore<T> {
|
||||
const annotations: T[] = [];
|
||||
const listeners = new Set<MutationListener<T>>();
|
||||
let version = 0;
|
||||
|
||||
function emit(event: ExternalAnnotationEvent<T>): void {
|
||||
for (const listener of listeners) {
|
||||
try {
|
||||
listener(event);
|
||||
} catch {
|
||||
// Don't let a failing listener break the store
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
add(items) {
|
||||
if (items.length > 0) {
|
||||
for (const item of items) {
|
||||
annotations.push(item);
|
||||
}
|
||||
version++;
|
||||
emit({ type: "add", annotations: items });
|
||||
}
|
||||
return items;
|
||||
},
|
||||
|
||||
remove(id) {
|
||||
const idx = annotations.findIndex((a) => a.id === id);
|
||||
if (idx === -1) return false;
|
||||
annotations.splice(idx, 1);
|
||||
version++;
|
||||
emit({ type: "remove", ids: [id] });
|
||||
return true;
|
||||
},
|
||||
|
||||
update(id, fields) {
|
||||
const idx = annotations.findIndex((a) => a.id === id);
|
||||
if (idx === -1) return null;
|
||||
const merged = { ...annotations[idx], ...fields, id } as T;
|
||||
annotations[idx] = merged;
|
||||
version++;
|
||||
emit({ type: "update", id, annotation: merged });
|
||||
return merged;
|
||||
},
|
||||
|
||||
clearBySource(source) {
|
||||
const before = annotations.length;
|
||||
for (let i = annotations.length - 1; i >= 0; i--) {
|
||||
if (annotations[i].source === source) {
|
||||
annotations.splice(i, 1);
|
||||
}
|
||||
}
|
||||
const removed = before - annotations.length;
|
||||
if (removed > 0) {
|
||||
version++;
|
||||
emit({ type: "clear", source });
|
||||
}
|
||||
return removed;
|
||||
},
|
||||
|
||||
clearAll() {
|
||||
const count = annotations.length;
|
||||
if (count > 0) {
|
||||
annotations.length = 0;
|
||||
version++;
|
||||
emit({ type: "clear" });
|
||||
}
|
||||
return count;
|
||||
},
|
||||
|
||||
getAll() {
|
||||
return [...annotations];
|
||||
},
|
||||
|
||||
get version() {
|
||||
return version;
|
||||
},
|
||||
|
||||
onMutation(listener) {
|
||||
listeners.add(listener);
|
||||
return () => {
|
||||
listeners.delete(listener);
|
||||
};
|
||||
},
|
||||
};
|
||||
}
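// Wiring sketch (illustrative only): how a server adapter might glue the plan
// transformer, the store, and the SSE helpers together. `requestBody` and
// `sendToClient` are hypothetical stand-ins for the HTTP layer.
//
//   const store = createAnnotationStore<PlanAnnotation>();
//   const unsubscribe = store.onMutation((event) => sendToClient(serializeSSEEvent(event)));
//   const parsed = transformPlanInput(requestBody);
//   if (!("error" in parsed)) store.add(parsed.annotations);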
|
||||
6
extensions/plannotator/generated/favicon.ts
Normal file
@@ -0,0 +1,6 @@
// @generated — DO NOT EDIT. Source: packages/shared/favicon.ts
export const FAVICON_SVG = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
<rect width="64" height="64" rx="14" fill="#070b14"/>
<rect x="12" y="28" width="40" height="14" rx="3" fill="#E0BA55" opacity="0.35"/>
<text x="32" y="46" text-anchor="middle" font-family="Inter,system-ui,sans-serif" font-weight="800" font-size="42" fill="white">P</text>
</svg>`;
30
extensions/plannotator/generated/feedback-templates.ts
Normal file
@@ -0,0 +1,30 @@
// @generated — DO NOT EDIT. Source: packages/shared/feedback-templates.ts
/**
* Shared feedback templates for all agent integrations.
*
* The plan deny template was tuned in #224 / commit 3dca977 to use strong
* directive framing — Claude was ignoring softer phrasing.
*
* IMPORTANT: This module is imported by packages/ui/utils/parser.ts which is
* bundled into the browser SPA. It must NOT import from ./prompts or ./config
* (which depend on node:fs, node:os, node:child_process). Keep it self-contained.
*
* Server-side call sites use getPlanDeniedPrompt() from ./prompts directly.
* This module is only kept for the browser's wrapFeedbackForAgent clipboard feature.
*/

export interface PlanDenyFeedbackOptions {
planFilePath?: string;
}

export const planDenyFeedback = (
feedback: string,
toolName: string = "ExitPlanMode",
options?: PlanDenyFeedbackOptions,
): string => {
const planFileRule = options?.planFilePath
? `- Your plan is saved at: ${options.planFilePath}\n You can edit this file to make targeted changes, then pass its path to ${toolName}.\n`
: "";

return `YOUR PLAN WAS NOT APPROVED.\n\nYou MUST revise the plan to address ALL of the feedback below before calling ${toolName} again.\n\nRules:\n${planFileRule}- Do not resubmit the same plan unchanged.\n- Do NOT change the plan title (first # heading) unless the user explicitly asks you to.\n\n${feedback || "Plan changes requested"}`;
};
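// Usage sketch (illustrative only; the feedback string and plan path are hypothetical):
//
//   planDenyFeedback("Split step 2 into migration + backfill.", "plannotator_submit_plan", {
//     planFilePath: "plans/auth.md",
//   });
//   // → "YOUR PLAN WAS NOT APPROVED. ..." with the plan-file rule included in the Rules list.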
33
extensions/plannotator/generated/html-to-markdown.ts
Normal file
@@ -0,0 +1,33 @@
// @generated — DO NOT EDIT. Source: packages/shared/html-to-markdown.ts
/**
* HTML-to-Markdown conversion via Turndown.
*
* Shared between the CLI (single HTML file / URL) and the server
* (on-demand conversion for HTML files in folder mode).
*/

import TurndownService from "turndown";
// @ts-expect-error — @joplin/turndown-plugin-gfm ships JS only, no .d.ts (see declarations.d.ts for local types)
import { gfm } from "@joplin/turndown-plugin-gfm";

const td = new TurndownService({
headingStyle: "atx",
codeBlockStyle: "fenced",
bulletListMarker: "-",
});

td.use(gfm);

// Strip <style> and <script> tags entirely (Turndown keeps unrecognised
// tags as blank by default, but their text content can leak through).
td.remove(["style", "script", "noscript"]);

/**
* Convert an HTML string to Markdown.
*
* Uses a module-level TurndownService singleton (stateless, safe to reuse).
* GFM tables, strikethrough, and task lists are supported via turndown-plugin-gfm.
*/
export function htmlToMarkdown(html: string): string {
return td.turndown(html);
}
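// Usage sketch (illustrative only; exact output depends on Turndown's rules):
//
//   htmlToMarkdown("<h1>Title</h1><p>Hi <strong>there</strong></p>");
//   // → roughly "# Title\n\nHi **there**"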
244
extensions/plannotator/generated/integrations-common.ts
Normal file
@@ -0,0 +1,244 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/integrations-common.ts
|
||||
import { existsSync, readFileSync } from "fs";
|
||||
import { join } from "path";
|
||||
|
||||
// --- Types ---
|
||||
|
||||
export interface ObsidianConfig {
|
||||
vaultPath: string;
|
||||
folder: string;
|
||||
plan: string;
|
||||
filenameFormat?: string; // Custom format string, e.g. '{YYYY}-{MM}-{DD} - {title}'
|
||||
filenameSeparator?: "space" | "dash" | "underscore"; // Replace spaces in filename
|
||||
}
|
||||
|
||||
export interface BearConfig {
|
||||
plan: string;
|
||||
customTags?: string;
|
||||
tagPosition?: "prepend" | "append";
|
||||
}
|
||||
|
||||
export interface OctarineConfig {
|
||||
plan: string;
|
||||
workspace: string;
|
||||
folder: string;
|
||||
}
|
||||
|
||||
export interface IntegrationResult {
|
||||
success: boolean;
|
||||
error?: string;
|
||||
path?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect Obsidian vaults by reading Obsidian's config file
|
||||
* Returns array of vault paths found on the system
|
||||
*/
|
||||
export function detectObsidianVaults(): string[] {
|
||||
try {
|
||||
const home = process.env.HOME || process.env.USERPROFILE || "";
|
||||
let configPath: string;
|
||||
|
||||
// Platform-specific config locations
|
||||
if (process.platform === "darwin") {
|
||||
configPath = join(
|
||||
home,
|
||||
"Library/Application Support/obsidian/obsidian.json",
|
||||
);
|
||||
} else if (process.platform === "win32") {
|
||||
const appData = process.env.APPDATA || join(home, "AppData/Roaming");
|
||||
configPath = join(appData, "obsidian/obsidian.json");
|
||||
} else {
|
||||
// Linux
|
||||
configPath = join(home, ".config/obsidian/obsidian.json");
|
||||
}
|
||||
|
||||
if (!existsSync(configPath)) {
|
||||
return [];
|
||||
}
|
||||
|
||||
const configContent = readFileSync(configPath, "utf-8");
|
||||
const config = JSON.parse(configContent);
|
||||
|
||||
if (!config.vaults || typeof config.vaults !== "object") {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Extract vault paths, filter to ones that exist
|
||||
const vaults: string[] = [];
|
||||
for (const vaultId of Object.keys(config.vaults)) {
|
||||
const vault = config.vaults[vaultId];
|
||||
if (vault.path && existsSync(vault.path)) {
|
||||
vaults.push(vault.path);
|
||||
}
|
||||
}
|
||||
|
||||
return vaults;
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
// --- Frontmatter and Filename Generation ---
|
||||
|
||||
/**
|
||||
* Generate frontmatter for the note
|
||||
*/
|
||||
export function generateFrontmatter(tags: string[]): string {
|
||||
const now = new Date().toISOString();
|
||||
const tagList = tags.map((t) => t.toLowerCase()).join(", ");
|
||||
return `---
|
||||
created: ${now}
|
||||
source: plannotator
|
||||
tags: [${tagList}]
|
||||
---`;
|
||||
}
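// Output sketch (illustrative only; the timestamp is hypothetical):
//
//   generateFrontmatter(["Plan", "Auth"]);
//   // → "---\ncreated: 2026-01-02T14:30:00.000Z\nsource: plannotator\ntags: [plan, auth]\n---"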
|
||||
|
||||
/**
|
||||
* Extract title from markdown (first H1 heading)
|
||||
*/
|
||||
export function extractTitle(markdown: string): string {
|
||||
const h1Match = markdown.match(
|
||||
/^#\s+(?:Implementation\s+Plan:|Plan:)?\s*(.+)$/im,
|
||||
);
|
||||
if (h1Match) {
|
||||
// Clean up the title for use as filename
|
||||
return h1Match[1]
|
||||
.trim()
|
||||
.replace(/[<>:"/\\|?*(){}\[\]#~`]/g, "") // Remove invalid/problematic filename chars
|
||||
.replace(/\s+/g, " ") // Normalize whitespace
|
||||
.trim() // Re-trim after stripping
|
||||
.slice(0, 50); // Limit length
|
||||
}
|
||||
return "Plan";
|
||||
}
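// Usage sketch (illustrative only): the "Implementation Plan:" prefix is dropped
// and problematic filename characters are stripped.
//
//   extractTitle("# Implementation Plan: User Auth (v2)\n\n- [ ] step one");
//   // → "User Auth v2"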
|
||||
|
||||
/** Default filename format matching original behavior */
|
||||
export const DEFAULT_FILENAME_FORMAT =
|
||||
"{title} - {Mon} {D}, {YYYY} {h}-{mm}{ampm}";
|
||||
|
||||
/**
|
||||
* Generate filename from a format string with variable substitution.
|
||||
*
|
||||
* Supported variables:
|
||||
* {title} - Plan title from first H1 heading
|
||||
* {YYYY} - 4-digit year
|
||||
* {MM} - 2-digit month (01-12)
|
||||
* {DD} - 2-digit day (01-31)
|
||||
* {Mon} - Abbreviated month name (Jan, Feb, ...)
|
||||
* {D} - Day without leading zero
|
||||
* {HH} - 2-digit hour, 24h (00-23)
|
||||
* {h} - Hour without leading zero, 12h
|
||||
* {hh} - 2-digit hour, 12h (01-12)
|
||||
* {mm} - 2-digit minutes (00-59)
|
||||
* {ss} - 2-digit seconds (00-59)
|
||||
* {ampm} - am/pm
|
||||
*
|
||||
* Default format: '{title} - {Mon} {D}, {YYYY} {h}-{mm}{ampm}'
|
||||
* Example output: 'User Authentication - Jan 2, 2026 2-30pm.md'
|
||||
*/
|
||||
export function generateFilename(
|
||||
markdown: string,
|
||||
format?: string,
|
||||
separator?: "space" | "dash" | "underscore",
|
||||
): string {
|
||||
const title = extractTitle(markdown);
|
||||
const now = new Date();
|
||||
|
||||
const months = [
|
||||
"Jan",
|
||||
"Feb",
|
||||
"Mar",
|
||||
"Apr",
|
||||
"May",
|
||||
"Jun",
|
||||
"Jul",
|
||||
"Aug",
|
||||
"Sep",
|
||||
"Oct",
|
||||
"Nov",
|
||||
"Dec",
|
||||
];
|
||||
|
||||
const hour24 = now.getHours();
|
||||
const hour12 = hour24 % 12 || 12;
|
||||
const ampm = hour24 >= 12 ? "pm" : "am";
|
||||
|
||||
const vars: Record<string, string> = {
|
||||
title,
|
||||
YYYY: String(now.getFullYear()),
|
||||
MM: String(now.getMonth() + 1).padStart(2, "0"),
|
||||
DD: String(now.getDate()).padStart(2, "0"),
|
||||
Mon: months[now.getMonth()],
|
||||
D: String(now.getDate()),
|
||||
HH: String(hour24).padStart(2, "0"),
|
||||
h: String(hour12),
|
||||
hh: String(hour12).padStart(2, "0"),
|
||||
mm: String(now.getMinutes()).padStart(2, "0"),
|
||||
ss: String(now.getSeconds()).padStart(2, "0"),
|
||||
ampm,
|
||||
};
|
||||
|
||||
const template = format?.trim() || DEFAULT_FILENAME_FORMAT;
|
||||
const result = template.replace(
|
||||
/\{(\w+)\}/g,
|
||||
(match, key) => vars[key] ?? match,
|
||||
);
|
||||
|
||||
// Sanitize: remove characters invalid in filenames
|
||||
let sanitized = result
|
||||
.replace(/[<>:"/\\|?*]/g, "")
|
||||
.replace(/\s+/g, " ")
|
||||
.trim();
|
||||
|
||||
// Apply separator preference (replace spaces with dash or underscore)
|
||||
if (separator === "dash") {
|
||||
sanitized = sanitized.replace(/ /g, "-");
|
||||
} else if (separator === "underscore") {
|
||||
sanitized = sanitized.replace(/ /g, "_");
|
||||
}
|
||||
|
||||
return sanitized.endsWith(".md") ? sanitized : `${sanitized}.md`;
|
||||
}
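// Usage sketch (illustrative only; assumes the clock reads Jan 2, 2026):
//
//   generateFilename("# Plan: Auth Cleanup", "{YYYY}-{MM}-{DD} {title}", "dash");
//   // → "2026-01-02-Auth-Cleanup.md"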
|
||||
|
||||
// --- Bear Integration ---
|
||||
|
||||
export function stripH1(plan: string): string {
|
||||
return plan.replace(/^#\s+.+\n?/m, "").trimStart();
|
||||
}
|
||||
|
||||
export function buildHashtags(
|
||||
customTags: string | undefined,
|
||||
autoTags: string[],
|
||||
): string {
|
||||
if (customTags?.trim()) {
|
||||
return customTags
|
||||
.split(",")
|
||||
.map((t) => `#${t.trim()}`)
|
||||
.filter((t) => t !== "#")
|
||||
.join(" ");
|
||||
}
|
||||
return autoTags.map((t) => `#${t}`).join(" ");
|
||||
}
|
||||
|
||||
export function buildBearContent(
|
||||
body: string,
|
||||
hashtags: string,
|
||||
tagPosition: "prepend" | "append",
|
||||
): string {
|
||||
return tagPosition === "prepend"
|
||||
? `${hashtags}\n\n${body}`
|
||||
: `${body}\n\n${hashtags}`;
|
||||
}
|
||||
|
||||
// --- Octarine Integration ---
|
||||
|
||||
/**
|
||||
* Generate YAML frontmatter for an Octarine note.
|
||||
* Uses Octarine's property format (list-style tags, Status, Author, Last Edited).
|
||||
*/
|
||||
export function generateOctarineFrontmatter(tags: string[]): string {
|
||||
const now = new Date().toISOString().slice(0, 16); // YYYY-MM-DDTHH:MM
|
||||
const tagLines = tags.map((t) => ` - ${t.toLowerCase()}`).join("\n");
|
||||
return `---\ntags:\n${tagLines}\nStatus: Draft\nAuthor: plannotator\nLast Edited: ${now}\n---`;
|
||||
}
|
||||
19
extensions/plannotator/generated/path-utils.ts
Normal file
@@ -0,0 +1,19 @@
// @generated — DO NOT EDIT. Source: packages/server/path-utils.ts
/**
* Strip a cwd prefix from an absolute path to get a repo-relative path.
* Used by review agent transforms to convert absolute file paths from
* agent output into diff-compatible relative paths.
*
* Uses path.relative for cross-platform support (Windows backslashes)
* and normalizes to forward slashes for git diff path matching.
*/
import { relative } from "node:path";

export function toRelativePath(absolutePath: string, cwd?: string): string {
if (!cwd) return absolutePath;
const rel = relative(cwd, absolutePath);
// Don't relativize if the result goes outside cwd (different drive, symlink escape)
if (rel.startsWith("..")) return absolutePath;
// Normalize to forward slashes for diff path matching
return rel.replace(/\\/g, "/");
}
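// Usage sketch (illustrative only; paths are hypothetical):
//
//   toRelativePath("/repo/src/auth.ts", "/repo");  // → "src/auth.ts"
//   toRelativePath("/elsewhere/x.ts", "/repo");    // → "/elsewhere/x.ts" (outside cwd, kept absolute)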
662
extensions/plannotator/generated/pr-github.ts
Normal file
@@ -0,0 +1,662 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/pr-github.ts
|
||||
/**
|
||||
* GitHub-specific PR provider implementation.
|
||||
*
|
||||
* All functions use the `gh` CLI via the PRRuntime abstraction.
|
||||
*/
|
||||
|
||||
import type { PRRuntime, PRMetadata, PRContext, PRReviewThread, PRThreadComment, PRReviewFileComment, CommandResult, PRStackTree, PRStackNode, PRListItem } from "./pr-provider";
|
||||
import { encodeApiFilePath } from "./pr-provider";
|
||||
|
||||
// GitHub-specific PRRef shape (used internally)
|
||||
interface GhPRRef {
|
||||
platform: "github";
|
||||
host: string;
|
||||
owner: string;
|
||||
repo: string;
|
||||
number: number;
|
||||
}
|
||||
|
||||
/** Build the --repo flag value: HOST/OWNER/REPO for GHE, OWNER/REPO for github.com */
|
||||
function repoFlag(ref: GhPRRef): string {
|
||||
if (ref.host !== "github.com") {
|
||||
return `${ref.host}/${ref.owner}/${ref.repo}`;
|
||||
}
|
||||
return `${ref.owner}/${ref.repo}`;
|
||||
}
|
||||
|
||||
/** Append --hostname to args for gh api / gh auth on GHE */
|
||||
function hostnameArgs(host: string, args: string[]): string[] {
|
||||
if (host !== "github.com") {
|
||||
return [...args, "--hostname", host];
|
||||
}
|
||||
return args;
|
||||
}
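// Illustrative sketch (hypothetical values): how the two helpers behave for
// github.com vs GitHub Enterprise hosts.
//
//   repoFlag({ platform: "github", host: "github.com", owner: "acme", repo: "app", number: 7 });
//   // → "acme/app"
//   hostnameArgs("ghe.example.com", ["auth", "status"]);
//   // → ["auth", "status", "--hostname", "ghe.example.com"]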
|
||||
|
||||
// --- Auth ---
|
||||
|
||||
export async function checkGhAuth(runtime: PRRuntime, host: string): Promise<void> {
|
||||
const result = await runtime.runCommand("gh", hostnameArgs(host, ["auth", "status"]));
|
||||
if (result.exitCode !== 0) {
|
||||
const stderr = result.stderr.trim();
|
||||
const hostHint = host !== "github.com" ? ` --hostname ${host}` : "";
|
||||
throw new Error(
|
||||
`GitHub CLI not authenticated. Run \`gh auth login${hostHint}\` first.\n${stderr}`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export async function getGhUser(runtime: PRRuntime, host: string): Promise<string | null> {
|
||||
try {
|
||||
const result = await runtime.runCommand("gh", hostnameArgs(host, ["api", "user", "--jq", ".login"]));
|
||||
if (result.exitCode === 0 && result.stdout.trim()) {
|
||||
return result.stdout.trim();
|
||||
}
|
||||
return null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// --- Fetch PR ---
|
||||
|
||||
export async function fetchGhPR(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
): Promise<{ metadata: PRMetadata; rawPatch: string }> {
|
||||
const repo = repoFlag(ref);
|
||||
|
||||
// Fetch diff, metadata, and repository defaults in parallel.
|
||||
const [diffResult, viewResult, repoResult] = await Promise.all([
|
||||
runtime.runCommand("gh", [
|
||||
"pr", "diff", String(ref.number),
|
||||
"--repo", repo,
|
||||
]),
|
||||
runtime.runCommand("gh", [
|
||||
"pr", "view", String(ref.number),
|
||||
"--repo", repo,
|
||||
"--json", "id,title,author,baseRefName,headRefName,baseRefOid,headRefOid,url",
|
||||
]),
|
||||
runtime.runCommand("gh", [
|
||||
"repo", "view", repo,
|
||||
"--json", "defaultBranchRef",
|
||||
"--jq", ".defaultBranchRef.name",
|
||||
]),
|
||||
]);
|
||||
|
||||
if (diffResult.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch PR diff: ${diffResult.stderr.trim() || `exit code ${diffResult.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
if (viewResult.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch PR metadata: ${viewResult.stderr.trim() || `exit code ${viewResult.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
const raw = JSON.parse(viewResult.stdout) as {
|
||||
id: string;
|
||||
title: string;
|
||||
author: { login: string };
|
||||
baseRefName: string;
|
||||
headRefName: string;
|
||||
baseRefOid: string;
|
||||
headRefOid: string;
|
||||
url: string;
|
||||
};
|
||||
|
||||
// Fetch the merge-base SHA — the common ancestor commit GitHub uses to compute the PR diff.
|
||||
// baseSha (baseRefOid) is the tip of the base branch, which may have moved since the branch point.
|
||||
// File contents must be fetched at the merge-base to match the diff hunks.
|
||||
let mergeBaseSha: string | undefined;
|
||||
try {
|
||||
const compareResult = await runtime.runCommand("gh", hostnameArgs(ref.host, [
|
||||
"api",
|
||||
`repos/${ref.owner}/${ref.repo}/compare/${raw.baseRefOid}...${raw.headRefOid}`,
|
||||
"--jq", ".merge_base_commit.sha",
|
||||
]));
|
||||
if (compareResult.exitCode === 0 && compareResult.stdout.trim()) {
|
||||
mergeBaseSha = compareResult.stdout.trim();
|
||||
}
|
||||
} catch { /* fallback to baseSha if compare API fails */ }
|
||||
|
||||
const metadata: PRMetadata = {
|
||||
platform: "github",
|
||||
host: ref.host,
|
||||
owner: ref.owner,
|
||||
repo: ref.repo,
|
||||
number: ref.number,
|
||||
prNodeId: raw.id,
|
||||
title: raw.title,
|
||||
author: raw.author.login,
|
||||
baseBranch: raw.baseRefName,
|
||||
headBranch: raw.headRefName,
|
||||
defaultBranch: repoResult.exitCode === 0 && repoResult.stdout.trim() && repoResult.stdout.trim() !== "null"
|
||||
? repoResult.stdout.trim()
|
||||
: undefined,
|
||||
baseSha: raw.baseRefOid,
|
||||
headSha: raw.headRefOid,
|
||||
mergeBaseSha,
|
||||
url: raw.url,
|
||||
};
|
||||
|
||||
return { metadata, rawPatch: diffResult.stdout };
|
||||
}
|
||||
|
||||
// --- PR Context ---
|
||||
|
||||
const GH_CONTEXT_FIELDS = [
|
||||
"body", "state", "isDraft", "labels",
|
||||
"comments", "reviews", "reviewDecision",
|
||||
"mergeable", "mergeStateStatus",
|
||||
"statusCheckRollup", "closingIssuesReferences",
|
||||
].join(",");
|
||||
|
||||
function parseGhPRContext(raw: Record<string, unknown>): PRContext {
|
||||
const arr = (v: unknown): unknown[] => (Array.isArray(v) ? v : []);
|
||||
const str = (v: unknown): string => (typeof v === "string" ? v : "");
|
||||
const login = (v: unknown): string =>
|
||||
typeof v === "object" && v !== null && "login" in v
|
||||
? String((v as { login: unknown }).login || "")
|
||||
: "";
|
||||
|
||||
return {
|
||||
body: str(raw.body),
|
||||
state: str(raw.state),
|
||||
isDraft: raw.isDraft === true,
|
||||
labels: arr(raw.labels).map((l: any) => ({
|
||||
name: str(l?.name),
|
||||
color: str(l?.color),
|
||||
})),
|
||||
reviewDecision: str(raw.reviewDecision),
|
||||
mergeable: str(raw.mergeable),
|
||||
mergeStateStatus: str(raw.mergeStateStatus),
|
||||
comments: arr(raw.comments).map((c: any) => ({
|
||||
id: str(c?.id),
|
||||
author: login(c?.author),
|
||||
body: str(c?.body),
|
||||
createdAt: str(c?.createdAt),
|
||||
url: str(c?.url),
|
||||
})),
|
||||
reviews: arr(raw.reviews).map((r: any) => ({
|
||||
id: str(r?.id),
|
||||
author: login(r?.author),
|
||||
state: str(r?.state),
|
||||
body: str(r?.body),
|
||||
submittedAt: str(r?.submittedAt),
|
||||
...(r?.url ? { url: str(r.url) } : {}),
|
||||
})),
|
||||
reviewThreads: [], // populated via GraphQL after initial fetch
|
||||
checks: arr(raw.statusCheckRollup).map((c: any) => ({
|
||||
name: str(c?.name),
|
||||
status: str(c?.status),
|
||||
conclusion: typeof c?.conclusion === "string" ? c.conclusion : null,
|
||||
workflowName: str(c?.workflowName),
|
||||
detailsUrl: str(c?.detailsUrl),
|
||||
})),
|
||||
linkedIssues: arr(raw.closingIssuesReferences).map((i: any) => ({
|
||||
number: typeof i?.number === "number" ? i.number : 0,
|
||||
url: str(i?.url),
|
||||
repo: i?.repository
|
||||
? `${login(i.repository.owner)}/${str(i.repository.name)}`
|
||||
: "",
|
||||
})),
|
||||
};
|
||||
}
|
||||
|
||||
export async function fetchGhPRContext(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
): Promise<PRContext> {
|
||||
const repo = repoFlag(ref);
|
||||
|
||||
const result = await runtime.runCommand("gh", [
|
||||
"pr", "view", String(ref.number),
|
||||
"--repo", repo,
|
||||
"--json", GH_CONTEXT_FIELDS,
|
||||
]);
|
||||
|
||||
if (result.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch PR context: ${result.stderr.trim() || `exit code ${result.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
const raw = JSON.parse(result.stdout) as Record<string, unknown>;
|
||||
const context = parseGhPRContext(raw);
|
||||
|
||||
// Fetch inline review threads via GraphQL (parallel-safe, non-blocking failure)
|
||||
try {
|
||||
context.reviewThreads = await fetchGhReviewThreads(runtime, ref);
|
||||
} catch {
|
||||
// GraphQL may not be available or may fail — degrade gracefully
|
||||
context.reviewThreads = [];
|
||||
}
|
||||
|
||||
return context;
|
||||
}
|
||||
|
||||
// --- Review Threads (GraphQL) ---
|
||||
|
||||
const REVIEW_THREADS_QUERY = `
|
||||
query($owner: String!, $repo: String!, $number: Int!) {
|
||||
repository(owner: $owner, name: $repo) {
|
||||
pullRequest(number: $number) {
|
||||
reviewThreads(first: 100) {
|
||||
nodes {
|
||||
id
|
||||
isResolved
|
||||
isOutdated
|
||||
line
|
||||
startLine
|
||||
path
|
||||
diffSide
|
||||
comments(first: 50) {
|
||||
nodes {
|
||||
id
|
||||
body
|
||||
author { login }
|
||||
createdAt
|
||||
url
|
||||
diffHunk
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}`;
|
||||
|
||||
async function fetchGhReviewThreads(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
): Promise<PRReviewThread[]> {
|
||||
const result = await runtime.runCommand("gh", hostnameArgs(ref.host, [
|
||||
"api", "graphql",
|
||||
"-f", `query=${REVIEW_THREADS_QUERY}`,
|
||||
"-f", `owner=${ref.owner}`,
|
||||
"-f", `repo=${ref.repo}`,
|
||||
"-F", `number=${ref.number}`,
|
||||
]));
|
||||
|
||||
if (result.exitCode !== 0) return [];
|
||||
|
||||
const data = JSON.parse(result.stdout);
|
||||
const threads = data?.data?.repository?.pullRequest?.reviewThreads?.nodes;
|
||||
if (!Array.isArray(threads)) return [];
|
||||
|
||||
return threads.map((t: any): PRReviewThread => ({
|
||||
id: String(t.id ?? ''),
|
||||
isResolved: t.isResolved === true,
|
||||
isOutdated: t.isOutdated === true,
|
||||
path: String(t.path ?? ''),
|
||||
line: typeof t.line === 'number' ? t.line : null,
|
||||
startLine: typeof t.startLine === 'number' ? t.startLine : null,
|
||||
diffSide: t.diffSide === 'LEFT' || t.diffSide === 'RIGHT' ? t.diffSide : null,
|
||||
comments: Array.isArray(t.comments?.nodes)
|
||||
? t.comments.nodes.map((c: any): PRThreadComment => ({
|
||||
id: String(c.id ?? ''),
|
||||
author: c.author?.login ? String(c.author.login) : '',
|
||||
body: String(c.body ?? ''),
|
||||
createdAt: String(c.createdAt ?? ''),
|
||||
url: String(c.url ?? ''),
|
||||
...(c.diffHunk ? { diffHunk: String(c.diffHunk) } : {}),
|
||||
}))
|
||||
: [],
|
||||
}));
|
||||
}
|
||||
|
||||
// --- File Content ---
|
||||
|
||||
export async function fetchGhPRFileContent(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
sha: string,
|
||||
filePath: string,
|
||||
): Promise<string | null> {
|
||||
const result = await runtime.runCommand("gh", hostnameArgs(ref.host, [
|
||||
"api",
|
||||
`repos/${ref.owner}/${ref.repo}/contents/${encodeApiFilePath(filePath)}?ref=${sha}`,
|
||||
"--jq", ".content",
|
||||
]));
|
||||
|
||||
if (result.exitCode !== 0) return null;
|
||||
|
||||
const base64Content = result.stdout.trim();
|
||||
if (!base64Content) return null;
|
||||
|
||||
// GitHub returns base64-encoded content with newlines
|
||||
const cleaned = base64Content.replace(/\n/g, "");
|
||||
try {
|
||||
return Buffer.from(cleaned, "base64").toString("utf-8");
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// --- Viewed Files ---
|
||||
|
||||
/**
|
||||
* Fetch the per-file "viewed" state for a GitHub PR via GraphQL.
|
||||
* Returns a map of { filePath: isViewed } where isViewed is true for
|
||||
* VIEWED or DISMISSED states (i.e., the file was reviewed but may need
|
||||
* re-review after new commits).
|
||||
*/
|
||||
export async function fetchGhPRViewedFiles(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
): Promise<Record<string, boolean>> {
|
||||
const query = `
|
||||
query($owner: String!, $repo: String!, $number: Int!, $cursor: String) {
|
||||
repository(owner: $owner, name: $repo) {
|
||||
pullRequest(number: $number) {
|
||||
files(first: 100, after: $cursor) {
|
||||
nodes {
|
||||
path
|
||||
viewerViewedState
|
||||
}
|
||||
pageInfo {
|
||||
hasNextPage
|
||||
endCursor
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const result: Record<string, boolean> = {};
|
||||
let cursor: string | null = null;
|
||||
|
||||
// Paginate through all files (GitHub returns max 100 per page)
|
||||
do {
|
||||
const args = hostnameArgs(ref.host, [
|
||||
"api", "graphql",
|
||||
"-f", `query=${query}`,
|
||||
"-F", `owner=${ref.owner}`,
|
||||
"-F", `repo=${ref.repo}`,
|
||||
"-F", `number=${ref.number}`,
|
||||
]);
|
||||
if (cursor) {
|
||||
args.push("-F", `cursor=${cursor}`);
|
||||
}
|
||||
|
||||
const res = await runtime.runCommand("gh", args);
|
||||
if (res.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch PR viewed files: ${res.stderr.trim() || `exit code ${res.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
const data = JSON.parse(res.stdout) as {
|
||||
data?: {
|
||||
repository?: {
|
||||
pullRequest?: {
|
||||
files?: {
|
||||
nodes: Array<{ path: string; viewerViewedState: string }>;
|
||||
pageInfo: { hasNextPage: boolean; endCursor: string | null };
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
errors?: Array<{ message: string }>;
|
||||
};
|
||||
|
||||
if (data.errors?.length) {
|
||||
throw new Error(`GraphQL error: ${data.errors[0].message}`);
|
||||
}
|
||||
|
||||
const files = data.data?.repository?.pullRequest?.files;
|
||||
if (!files) break;
|
||||
|
||||
for (const node of files.nodes) {
|
||||
// VIEWED = explicitly marked as viewed
|
||||
// DISMISSED = was viewed but new commits arrived (still "was reviewed")
|
||||
result[node.path] = node.viewerViewedState === "VIEWED" || node.viewerViewedState === "DISMISSED";
|
||||
}
|
||||
|
||||
cursor = files.pageInfo.hasNextPage ? files.pageInfo.endCursor : null;
|
||||
} while (cursor !== null);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Mark or unmark a set of files as viewed in a GitHub PR via GraphQL mutations.
|
||||
* Uses Promise.allSettled so a single file failure doesn't block the rest.
|
||||
* Throws only if ALL mutations fail.
|
||||
*/
|
||||
export async function markGhFilesViewed(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
prNodeId: string,
|
||||
filePaths: string[],
|
||||
viewed: boolean,
|
||||
): Promise<void> {
|
||||
if (filePaths.length === 0) return;
|
||||
|
||||
const mutationName = viewed ? "markFileAsViewed" : "unmarkFileAsViewed";
|
||||
const mutation = `
|
||||
mutation($id: ID!, $path: String!) {
|
||||
${mutationName}(input: { pullRequestId: $id, path: $path }) {
|
||||
clientMutationId
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const results = await Promise.allSettled(
|
||||
filePaths.map((path) =>
|
||||
runtime.runCommandWithInput
|
||||
? runtime.runCommand("gh", hostnameArgs(ref.host, [
|
||||
"api", "graphql",
|
||||
"-f", `query=${mutation}`,
|
||||
"-F", `id=${prNodeId}`,
|
||||
"-F", `path=${path}`,
|
||||
]))
|
||||
: Promise.reject(new Error("Runtime does not support commands")),
|
||||
),
|
||||
);
|
||||
|
||||
const failures = results.filter((r): r is PromiseRejectedResult => r.status === "rejected");
|
||||
if (failures.length === filePaths.length) {
|
||||
throw new Error(
|
||||
`Failed to ${mutationName} all files: ${failures[0].reason}`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// --- Submit PR Review ---
|
||||
|
||||
export async function submitGhPRReview(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
headSha: string,
|
||||
action: "approve" | "comment",
|
||||
body: string,
|
||||
fileComments: PRReviewFileComment[],
|
||||
): Promise<void> {
|
||||
const payload = JSON.stringify({
|
||||
commit_id: headSha,
|
||||
body,
|
||||
event: action === "approve" ? "APPROVE" : "COMMENT",
|
||||
comments: fileComments,
|
||||
});
|
||||
|
||||
const endpoint = `repos/${ref.owner}/${ref.repo}/pulls/${ref.number}/reviews`;
|
||||
|
||||
let result: CommandResult;
|
||||
|
||||
if (runtime.runCommandWithInput) {
|
||||
result = await runtime.runCommandWithInput(
|
||||
"gh",
|
||||
hostnameArgs(ref.host, ["api", endpoint, "--method", "POST", "--input", "-"]),
|
||||
payload,
|
||||
);
|
||||
} else {
|
||||
throw new Error("Runtime does not support stdin input; cannot submit PR review");
|
||||
}
|
||||
|
||||
if (result.exitCode !== 0) {
|
||||
const message = result.stderr.trim() || result.stdout.trim() || `exit code ${result.exitCode}`;
|
||||
throw new Error(`Failed to submit PR review: ${message}`);
|
||||
}
|
||||
}
|
||||
|
||||
// --- Stack Tree (GraphQL) ---
|
||||
|
||||
type StackPRNode = { number: number; title: string; url: string; baseRefName: string; headRefName: string; state: string };
|
||||
|
||||
function stackPRQuery(kind: "head" | "base"): string {
|
||||
const varName = kind === "head" ? "headRefName" : "baseRefName";
|
||||
const first = kind === "head" ? 5 : 10;
|
||||
return `
|
||||
query($owner: String!, $repo: String!, $${varName}: String!) {
|
||||
repository(owner: $owner, name: $repo) {
|
||||
pullRequests(first: ${first}, ${varName}: $${varName}, states: [OPEN, MERGED]) {
|
||||
nodes { number title url baseRefName headRefName state }
|
||||
}
|
||||
}
|
||||
}`;
|
||||
}
|
||||
|
||||
async function queryPRsByRef(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
kind: "head" | "base",
|
||||
refName: string,
|
||||
): Promise<StackPRNode[]> {
|
||||
const varName = kind === "head" ? "headRefName" : "baseRefName";
|
||||
const result = await runtime.runCommand("gh", hostnameArgs(ref.host, [
|
||||
"api", "graphql",
|
||||
"-f", `query=${stackPRQuery(kind)}`,
|
||||
"-f", `owner=${ref.owner}`,
|
||||
"-f", `repo=${ref.repo}`,
|
||||
"-f", `${varName}=${refName}`,
|
||||
]));
|
||||
if (result.exitCode !== 0) return [];
|
||||
const data = JSON.parse(result.stdout);
|
||||
const prs = data?.data?.repository?.pullRequests?.nodes;
|
||||
return Array.isArray(prs) ? prs : [];
|
||||
}
|
||||
|
||||
/**
|
||||
* Walk up and down the PR stack from the current PR, resolving
|
||||
* PR numbers/titles for every node in the chain.
|
||||
*
|
||||
* Up: walk from currentPR.baseBranch → defaultBranch (ancestors)
|
||||
* Down: walk from currentPR.headBranch → leaf PRs (descendants)
|
||||
*/
|
||||
export async function fetchGhPRStack(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
metadata: PRMetadata,
|
||||
): Promise<PRStackTree | null> {
|
||||
if (metadata.platform !== "github") return null;
|
||||
const defaultBranch = metadata.defaultBranch;
|
||||
if (!defaultBranch) return null;
|
||||
|
||||
const currentNode: PRStackNode = {
|
||||
branch: metadata.headBranch,
|
||||
number: metadata.number,
|
||||
title: metadata.title,
|
||||
url: metadata.url,
|
||||
isCurrent: true,
|
||||
isDefaultBranch: false,
|
||||
};
|
||||
|
||||
// Walk up: find the PR whose headRefName === baseBranch, repeat
|
||||
const ancestors: PRStackNode[] = [];
|
||||
let nextHead = metadata.baseBranch;
|
||||
const maxDepth = 10;
|
||||
|
||||
for (let i = 0; i < maxDepth; i++) {
|
||||
if (nextHead === defaultBranch) break;
|
||||
|
||||
const prs = await queryPRsByRef(runtime, ref, "head", nextHead);
|
||||
if (prs.length === 0) {
|
||||
ancestors.push({ branch: nextHead, isCurrent: false, isDefaultBranch: false });
|
||||
break;
|
||||
}
|
||||
|
||||
const pr = prs[0];
|
||||
ancestors.push({
|
||||
branch: pr.headRefName,
|
||||
number: pr.number,
|
||||
title: pr.title,
|
||||
url: pr.url,
|
||||
isCurrent: false,
|
||||
isDefaultBranch: false,
|
||||
state: (pr.state === 'MERGED' ? 'merged' : pr.state === 'CLOSED' ? 'closed' : 'open') as PRStackNode['state'],
|
||||
});
|
||||
nextHead = pr.baseRefName;
|
||||
}
|
||||
|
||||
// Walk down: find PRs whose baseRefName === current headBranch, repeat
|
||||
const descendants: PRStackNode[] = [];
|
||||
let nextBase = metadata.headBranch;
|
||||
|
||||
for (let i = 0; i < maxDepth; i++) {
|
||||
const prs = await queryPRsByRef(runtime, ref, "base", nextBase);
|
||||
if (prs.length === 0) break;
|
||||
|
||||
const pr = prs[0];
|
||||
descendants.push({
|
||||
branch: pr.headRefName,
|
||||
number: pr.number,
|
||||
title: pr.title,
|
||||
url: pr.url,
|
||||
isCurrent: false,
|
||||
isDefaultBranch: false,
|
||||
state: (pr.state === 'MERGED' ? 'merged' : pr.state === 'CLOSED' ? 'closed' : 'open') as PRStackNode['state'],
|
||||
});
|
||||
nextBase = pr.headRefName;
|
||||
}
|
||||
|
||||
// Build tree: defaultBranch → ancestors (reversed) → current → descendants
|
||||
const nodes: PRStackNode[] = [
|
||||
{ branch: defaultBranch, isCurrent: false, isDefaultBranch: true },
|
||||
...ancestors.reverse(),
|
||||
currentNode,
|
||||
...descendants,
|
||||
];
|
||||
|
||||
return { nodes };
|
||||
}
|
||||
|
||||
// --- PR List ---
|
||||
|
||||
export async function fetchGhPRList(
|
||||
runtime: PRRuntime,
|
||||
ref: GhPRRef,
|
||||
): Promise<PRListItem[]> {
|
||||
const result = await runtime.runCommand("gh", [
|
||||
"pr", "list",
|
||||
"--repo", repoFlag(ref),
|
||||
"--json", "number,title,author,url,baseRefName,state",
|
||||
"--limit", "30",
|
||||
"--state", "all",
|
||||
]);
|
||||
|
||||
if (result.exitCode !== 0) return [];
|
||||
|
||||
const raw = JSON.parse(result.stdout) as Array<{
|
||||
number: number;
|
||||
title: string;
|
||||
author: { login: string };
|
||||
url: string;
|
||||
baseRefName: string;
|
||||
state: string;
|
||||
}>;
|
||||
|
||||
return raw.map((pr) => ({
|
||||
id: String(pr.number),
|
||||
number: pr.number,
|
||||
title: pr.title,
|
||||
author: pr.author.login,
|
||||
url: pr.url,
|
||||
baseBranch: pr.baseRefName,
|
||||
state: (pr.state === "OPEN" ? "open" : pr.state === "MERGED" ? "merged" : "closed") as PRListItem["state"],
|
||||
}));
|
||||
}
|
||||
521
extensions/plannotator/generated/pr-gitlab.ts
Normal file
@@ -0,0 +1,521 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/pr-gitlab.ts
|
||||
/**
|
||||
* GitLab-specific MR provider implementation.
|
||||
*
|
||||
* All functions use the `glab` CLI via the PRRuntime abstraction.
|
||||
* Self-hosted instances are supported via the --hostname flag.
|
||||
*/
|
||||
|
||||
import type { PRRuntime, PRMetadata, PRContext, PRReviewFileComment, CommandResult } from "./pr-provider";
|
||||
import { encodeApiFilePath } from "./pr-provider";
|
||||
|
||||
// GitLab-specific MRRef shape (used internally)
|
||||
interface GlMRRef {
|
||||
platform: "gitlab";
|
||||
host: string;
|
||||
projectPath: string;
|
||||
iid: number;
|
||||
}
|
||||
|
||||
/** URL-encode the project path for GitLab API (group/project → group%2Fproject) */
|
||||
function encodeProject(projectPath: string): string {
|
||||
return encodeURIComponent(projectPath);
|
||||
}
|
||||
|
||||
/** Build glab API args with optional --hostname for self-hosted */
|
||||
function apiArgs(host: string, endpoint: string, extra: string[] = []): string[] {
|
||||
const args = ["api", endpoint, ...extra];
|
||||
if (host !== "gitlab.com") {
|
||||
args.push("--hostname", host);
|
||||
}
|
||||
return args;
|
||||
}
|
||||
|
||||
/** Shape of each entry from the GitLab merge_request diffs API */
|
||||
interface GitLabDiffEntry {
|
||||
diff: string;
|
||||
old_path: string;
|
||||
new_path: string;
|
||||
new_file: boolean;
|
||||
deleted_file: boolean;
|
||||
renamed_file: boolean;
|
||||
}
|
||||
|
||||
/** Parse JSON array from glab api --paginate output (already merged by glab) */
|
||||
function parsePaginatedArray<T>(stdout: string): T[] {
|
||||
return JSON.parse(stdout) as T[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Reconstruct a unified patch from GitLab's merge_request diffs API response.
|
||||
*
|
||||
* Each entry has: { diff, old_path, new_path, new_file, deleted_file, renamed_file }
|
||||
* We construct proper `diff --git` headers that the UI parser expects.
|
||||
*/
|
||||
function reconstructPatch(diffs: GitLabDiffEntry[]): string {
|
||||
const parts: string[] = [];
|
||||
|
||||
for (const d of diffs) {
|
||||
const aPath = d.new_file ? "/dev/null" : `a/${d.old_path}`;
|
||||
const bPath = d.deleted_file ? "/dev/null" : `b/${d.new_path}`;
|
||||
const displayOld = d.new_file ? d.new_path : d.old_path;
|
||||
const displayNew = d.deleted_file ? d.old_path : d.new_path;
|
||||
|
||||
let header = `diff --git a/${displayOld} b/${displayNew}`;
|
||||
if (d.renamed_file) {
|
||||
header += `\nrename from ${d.old_path}\nrename to ${d.new_path}`;
|
||||
}
|
||||
if (d.new_file) {
|
||||
header += "\nnew file mode 100644";
|
||||
}
|
||||
if (d.deleted_file) {
|
||||
header += "\ndeleted file mode 100644";
|
||||
}
|
||||
|
||||
parts.push(`${header}\n--- ${aPath}\n+++ ${bPath}\n${d.diff}`);
|
||||
}
|
||||
|
||||
return parts.join("");
|
||||
}
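// Illustrative sketch (hypothetical entry): a new file gets a /dev/null old side
// and a "new file mode" header, matching what the UI diff parser expects.
//
//   reconstructPatch([{
//     diff: "@@ -0,0 +1 @@\n+hello\n",
//     old_path: "hello.txt", new_path: "hello.txt",
//     new_file: true, deleted_file: false, renamed_file: false,
//   }]);
//   // → "diff --git a/hello.txt b/hello.txt\nnew file mode 100644\n--- /dev/null\n+++ b/hello.txt\n@@ -0,0 +1 @@\n+hello\n"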
|
||||
|
||||
// --- Auth ---
|
||||
|
||||
export async function checkGlAuth(runtime: PRRuntime, host: string): Promise<void> {
|
||||
const args = ["auth", "status"];
|
||||
if (host !== "gitlab.com") {
|
||||
args.push("--hostname", host);
|
||||
}
|
||||
const result = await runtime.runCommand("glab", args);
|
||||
if (result.exitCode !== 0) {
|
||||
const stderr = result.stderr.trim();
|
||||
const hostHint = host !== "gitlab.com" ? ` --hostname ${host}` : "";
|
||||
throw new Error(
|
||||
`GitLab CLI not authenticated. Run \`glab auth login${hostHint}\` first.\n${stderr}`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export async function getGlUser(runtime: PRRuntime, host: string): Promise<string | null> {
|
||||
try {
|
||||
const result = await runtime.runCommand("glab", apiArgs(host, "/user"));
|
||||
if (result.exitCode === 0 && result.stdout.trim()) {
|
||||
const user = JSON.parse(result.stdout) as { username?: string };
|
||||
return user.username ?? null;
|
||||
}
|
||||
return null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// --- Fetch MR ---
|
||||
|
||||
export async function fetchGlMR(
|
||||
runtime: PRRuntime,
|
||||
ref: GlMRRef,
|
||||
): Promise<{ metadata: PRMetadata; rawPatch: string }> {
|
||||
const encoded = encodeProject(ref.projectPath);
|
||||
|
||||
// Fetch diff and metadata in parallel via glab api (supports --hostname for self-hosted)
|
||||
const [diffResult, viewResult] = await Promise.all([
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `projects/${encoded}/merge_requests/${ref.iid}/diffs?per_page=100`, ["--paginate"])),
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `projects/${encoded}/merge_requests/${ref.iid}`)),
|
||||
]);
|
||||
|
||||
if (diffResult.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch MR diff: ${diffResult.stderr.trim() || `exit code ${diffResult.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
if (viewResult.exitCode !== 0) {
|
||||
throw new Error(
|
||||
`Failed to fetch MR metadata: ${viewResult.stderr.trim() || `exit code ${viewResult.exitCode}`}`,
|
||||
);
|
||||
}
|
||||
|
||||
// Reconstruct unified patch from structured API response
|
||||
const diffs = parsePaginatedArray<GitLabDiffEntry>(diffResult.stdout);
|
||||
const rawPatch = reconstructPatch(diffs);
|
||||
|
||||
const raw = JSON.parse(viewResult.stdout) as {
|
||||
title: string;
|
||||
author: { username: string };
|
||||
source_branch: string;
|
||||
target_branch: string;
|
||||
target_project_id?: number;
|
||||
diff_refs: { base_sha: string; head_sha: string; start_sha: string } | null;
|
||||
web_url: string;
|
||||
};
|
||||
|
||||
if (!raw.diff_refs) {
|
||||
throw new Error("MR has no diff refs — it may have been merged or the source branch deleted.");
|
||||
}
|
||||
|
||||
let defaultBranch: string | undefined;
|
||||
const projectEndpoint = typeof raw.target_project_id === "number"
|
||||
? `projects/${raw.target_project_id}`
|
||||
: `projects/${encoded}`;
|
||||
try {
|
||||
const projectResult = await runtime.runCommand("glab", apiArgs(ref.host, projectEndpoint));
|
||||
if (projectResult.exitCode === 0 && projectResult.stdout.trim()) {
|
||||
const project = JSON.parse(projectResult.stdout) as { default_branch?: string };
|
||||
defaultBranch = project.default_branch;
|
||||
}
|
||||
} catch { /* default branch is best-effort metadata */ }
|
||||
|
||||
const metadata: PRMetadata = {
|
||||
platform: "gitlab",
|
||||
host: ref.host,
|
||||
projectPath: ref.projectPath,
|
||||
iid: ref.iid,
|
||||
title: raw.title,
|
||||
author: raw.author.username,
|
||||
baseBranch: raw.target_branch,
|
||||
headBranch: raw.source_branch,
|
||||
defaultBranch,
|
||||
baseSha: raw.diff_refs.base_sha,
|
||||
headSha: raw.diff_refs.head_sha,
|
||||
url: raw.web_url,
|
||||
};
|
||||
|
||||
return { metadata, rawPatch };
|
||||
}
|
||||
|
||||
// --- MR Context ---
|
||||
|
||||
export async function fetchGlMRContext(
|
||||
runtime: PRRuntime,
|
||||
ref: GlMRRef,
|
||||
): Promise<PRContext> {
|
||||
const encoded = encodeProject(ref.projectPath);
|
||||
const mrEndpoint = `projects/${encoded}/merge_requests/${ref.iid}`;
|
||||
|
||||
// Fetch all context in parallel
|
||||
const [mrResult, notesResult, approvalsResult, pipelinesResult, issuesResult] = await Promise.all([
|
||||
runtime.runCommand("glab", apiArgs(ref.host, mrEndpoint)),
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `${mrEndpoint}/notes?sort=asc&per_page=100`)),
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `${mrEndpoint}/approvals`)),
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `${mrEndpoint}/pipelines?per_page=5`)),
|
||||
runtime.runCommand("glab", apiArgs(ref.host, `${mrEndpoint}/closes_issues`)),
|
||||
]);
|
||||
|
||||
const str = (v: unknown): string => (typeof v === "string" ? v : "");
|
||||
const arr = (v: unknown): unknown[] => (Array.isArray(v) ? v : []);
|
||||
|
||||
// --- MR details ---
|
||||
let mr: Record<string, unknown> = {};
|
||||
if (mrResult.exitCode === 0) {
|
||||
try { mr = JSON.parse(mrResult.stdout); } catch { /* non-JSON response */ }
|
||||
}
|
||||
|
||||
// Normalize state: GitLab uses "opened"/"closed"/"merged" → uppercase
|
||||
const glState = str(mr.state);
|
||||
const state = glState === "opened" ? "OPEN" : glState.toUpperCase();
|
||||
|
||||
const isDraft = mr.draft === true
|
||||
|| (typeof mr.title === "string" && /^(Draft:|WIP:)/i.test(mr.title));
|
||||
|
||||
const labels = arr(mr.labels).map((l: any) => {
|
||||
if (typeof l === "string") return { name: l, color: "" };
|
||||
return { name: str(l?.name), color: str(l?.color) };
|
||||
});
|
||||
|
||||
// GitLab merge_status values
|
||||
const mergeStatus = str(mr.merge_status);
|
||||
const detailedStatus = str(mr.detailed_merge_status);
|
||||
const mergeable = mergeStatus === "can_be_merged" ? "MERGEABLE"
|
||||
: mergeStatus === "cannot_be_merged" ? "CONFLICTING"
|
||||
: mergeStatus === "unchecked" ? "UNKNOWN"
|
||||
: mergeStatus.toUpperCase();
|
||||
|
||||
// Map GitLab detailed_merge_status to GitHub-compatible merge state enums
|
||||
const mergeStateMap: Record<string, string> = {
|
||||
mergeable: "CLEAN",
|
||||
broken_status: "DIRTY",
|
||||
checking: "UNKNOWN",
|
||||
unchecked: "UNKNOWN",
|
||||
ci_must_pass: "BLOCKED",
|
||||
ci_still_running: "BLOCKED",
|
||||
discussions_not_resolved: "BLOCKED",
|
||||
draft_status: "BLOCKED",
|
||||
blocked_status: "BLOCKED",
|
||||
not_approved: "BLOCKED",
|
||||
not_open: "DIRTY",
|
||||
need_rebase: "BEHIND",
|
||||
conflict: "DIRTY",
|
||||
jira_association_missing: "BLOCKED",
|
||||
};
|
||||
const mergeStateStatus = detailedStatus
|
||||
? (mergeStateMap[detailedStatus] ?? detailedStatus.toUpperCase())
|
||||
: mergeable;
|
||||
|
||||
// --- Notes (comments) ---
|
||||
const notes: PRContext["comments"] = [];
|
||||
if (notesResult.exitCode === 0) {
|
||||
try {
|
||||
const rawNotes = JSON.parse(notesResult.stdout) as any[];
|
||||
for (const n of rawNotes) {
|
||||
if (n.system) continue;
|
||||
notes.push({
|
||||
id: String(n.id ?? ""),
|
||||
author: str(n.author?.username),
|
||||
body: str(n.body),
|
||||
createdAt: str(n.created_at),
|
||||
url: str(n.web_url) || "",
|
||||
});
|
||||
}
|
||||
} catch { /* non-JSON response */ }
|
||||
}
|
||||
|
||||
// --- Approvals ---
|
||||
let reviewDecision = "";
|
||||
const reviews: PRContext["reviews"] = [];
|
||||
if (approvalsResult.exitCode === 0) {
|
||||
try {
|
||||
const approvals = JSON.parse(approvalsResult.stdout) as Record<string, unknown>;
|
||||
const approvedBy = arr(approvals.approved_by);
|
||||
const approved = approvals.approved === true || approvedBy.length > 0;
|
||||
reviewDecision = approved ? "APPROVED" : "";
|
||||
|
||||
for (const a of approvedBy) {
|
||||
const user = (a as any)?.user;
|
||||
if (!user) continue;
|
||||
reviews.push({
|
||||
id: String(user.id ?? ""),
|
||||
author: str(user.username),
|
||||
state: "APPROVED",
|
||||
body: "",
|
||||
submittedAt: "",
|
||||
});
|
||||
}
|
||||
} catch { /* non-JSON response */ }
|
||||
}
|
||||
|
||||
// --- Pipelines → Checks ---
|
||||
const checks: PRContext["checks"] = [];
|
||||
if (pipelinesResult.exitCode === 0) {
|
||||
try {
|
||||
const pipelines = JSON.parse(pipelinesResult.stdout) as any[];
|
||||
if (pipelines.length > 0) {
|
||||
const latest = pipelines[0];
|
||||
const jobsResult = await runtime.runCommand(
|
||||
"glab",
|
||||
apiArgs(ref.host, `projects/${encoded}/pipelines/${latest.id}/jobs?per_page=100`),
|
||||
);
|
||||
if (jobsResult.exitCode === 0) {
|
||||
try {
|
||||
const jobs = JSON.parse(jobsResult.stdout) as any[];
|
||||
for (const job of jobs) {
|
||||
const jobStatus = str(job.status);
|
||||
const isComplete = ["success", "failed", "canceled", "skipped"].includes(jobStatus);
|
||||
// Map GitLab job statuses to GitHub-compatible conclusion enums
|
||||
const conclusionMap: Record<string, string> = {
|
||||
success: "SUCCESS",
|
||||
failed: "FAILURE",
|
||||
canceled: "NEUTRAL",
|
||||
skipped: "SKIPPED",
|
||||
};
|
||||
checks.push({
|
||||
name: str(job.name),
|
||||
status: isComplete ? "COMPLETED" : "IN_PROGRESS",
|
||||
conclusion: isComplete ? (conclusionMap[jobStatus] ?? jobStatus.toUpperCase()) : null,
|
||||
workflowName: str(latest.ref),
|
||||
detailsUrl: str(job.web_url),
|
||||
});
|
||||
}
|
||||
} catch { /* non-JSON jobs response */ }
|
||||
}
|
||||
}
|
||||
} catch { /* non-JSON pipelines response */ }
|
||||
}
|
||||
|
||||
// --- Linked Issues ---
|
||||
const linkedIssues: PRContext["linkedIssues"] = [];
|
||||
if (issuesResult.exitCode === 0) {
|
||||
try {
|
||||
const issues = JSON.parse(issuesResult.stdout) as any[];
|
||||
for (const i of issues) {
|
||||
linkedIssues.push({
|
||||
number: typeof i.iid === "number" ? i.iid : 0,
|
||||
url: str(i.web_url),
|
||||
repo: ref.projectPath,
|
||||
});
|
||||
}
|
||||
} catch {
|
||||
// Non-critical — some GitLab versions may not support this endpoint
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
body: str(mr.description),
|
||||
state,
|
||||
isDraft,
|
||||
labels,
|
||||
reviewDecision,
|
||||
mergeable,
|
||||
mergeStateStatus,
|
||||
comments: notes,
|
||||
reviews,
|
||||
reviewThreads: [], // TODO: parse DiffNote positions from notes for thread support
|
||||
checks,
|
||||
linkedIssues,
|
||||
};
|
||||
}
|
||||
|
||||
// --- File Content ---
|
||||
|
||||
export async function fetchGlFileContent(
|
||||
runtime: PRRuntime,
|
||||
ref: GlMRRef,
|
||||
sha: string,
|
||||
filePath: string,
|
||||
): Promise<string | null> {
|
||||
const encoded = encodeProject(ref.projectPath);
|
||||
const encodedPath = encodeApiFilePath(filePath);
|
||||
|
||||
const result = await runtime.runCommand(
|
||||
"glab",
|
||||
apiArgs(ref.host, `projects/${encoded}/repository/files/${encodedPath}/raw?ref=${sha}`),
|
||||
);
|
||||
|
||||
if (result.exitCode !== 0) return null;
|
||||
|
||||
// GitLab returns raw file content (no base64 encoding)
|
||||
return result.stdout;
|
||||
}
|
||||
|
||||
// --- Submit MR Review ---
|
||||
|
||||
export async function submitGlMRReview(
|
||||
runtime: PRRuntime,
|
||||
ref: GlMRRef,
|
||||
headSha: string,
|
||||
action: "approve" | "comment",
|
||||
body: string,
|
||||
fileComments: PRReviewFileComment[],
|
||||
): Promise<void> {
|
||||
if (!runtime.runCommandWithInput) {
|
||||
throw new Error("Runtime does not support stdin input; cannot submit MR review");
|
||||
}
|
||||
|
||||
const encoded = encodeProject(ref.projectPath);
|
||||
const mrEndpoint = `projects/${encoded}/merge_requests/${ref.iid}`;
|
||||
|
||||
// Fetch base SHA for position context (needed for line comments)
|
||||
// We use the headSha passed in and derive baseSha from MR metadata
|
||||
// The caller already has this info, but GitLab's discussion API needs start_sha too
|
||||
|
||||
// 1. Post general body as a note (if non-empty)
|
||||
if (body && body.trim()) {
|
||||
const notePayload = JSON.stringify({ body: body.trim() });
|
||||
const noteResult = await runtime.runCommandWithInput(
|
||||
"glab",
|
||||
apiArgs(ref.host, `${mrEndpoint}/notes`, ["--method", "POST", "--input", "-", "-H", "Content-Type:application/json"]),
|
||||
notePayload,
|
||||
);
|
||||
if (noteResult.exitCode !== 0) {
|
||||
const msg = noteResult.stderr.trim() || noteResult.stdout.trim() || `exit code ${noteResult.exitCode}`;
|
||||
throw new Error(`Failed to post MR note: ${msg}`);
|
||||
}
|
||||
}
|
||||
|
||||
// 2. Post inline file comments as discussions with position
|
||||
if (fileComments.length > 0) {
|
||||
// We need the MR's diff_refs for the position SHAs.
|
||||
const mrResult = await runtime.runCommand(
|
||||
"glab",
|
||||
apiArgs(ref.host, mrEndpoint),
|
||||
);
|
||||
let baseSha = headSha; // fallback
|
||||
let startSha = headSha;
|
||||
if (mrResult.exitCode === 0 && mrResult.stdout.trim()) {
|
||||
try {
|
||||
const mrData = JSON.parse(mrResult.stdout) as { diff_refs?: { base_sha: string; start_sha: string; head_sha: string } };
|
||||
if (mrData.diff_refs) {
|
||||
baseSha = mrData.diff_refs.base_sha;
|
||||
startSha = mrData.diff_refs.start_sha;
|
||||
}
|
||||
} catch {
|
||||
// Use fallbacks
|
||||
}
|
||||
}
|
||||
|
||||
const errors: string[] = [];
|
||||
|
||||
// Submit comments in parallel
|
||||
const results = await Promise.allSettled(
|
||||
fileComments.map(async (comment) => {
|
||||
const isOldSide = comment.side === "LEFT";
|
||||
const position: Record<string, unknown> = {
|
||||
position_type: "text",
|
||||
base_sha: baseSha,
|
||||
head_sha: headSha,
|
||||
start_sha: startSha,
|
||||
new_path: comment.path,
|
||||
old_path: comment.path,
|
||||
};
|
||||
|
||||
if (isOldSide) {
|
||||
position.old_line = comment.line;
|
||||
} else {
|
||||
position.new_line = comment.line;
|
||||
}
|
||||
|
||||
// Multi-line range support
|
||||
if (comment.start_line != null && comment.start_line !== comment.line) {
|
||||
const startIsOld = (comment.start_side ?? comment.side) === "LEFT";
|
||||
const startEntry: Record<string, unknown> = { type: startIsOld ? "old" : "new" };
|
||||
if (startIsOld) startEntry.old_line = comment.start_line;
|
||||
else startEntry.new_line = comment.start_line;
|
||||
|
||||
const endEntry: Record<string, unknown> = { type: isOldSide ? "old" : "new" };
|
||||
if (isOldSide) endEntry.old_line = comment.line;
|
||||
else endEntry.new_line = comment.line;
|
||||
|
||||
position.line_range = { start: startEntry, end: endEntry };
|
||||
}
|
||||
|
||||
const payload = JSON.stringify({ body: comment.body, position });
|
||||
const res = await runtime.runCommandWithInput!(
|
||||
"glab",
|
||||
apiArgs(ref.host, `${mrEndpoint}/discussions`, ["--method", "POST", "--input", "-", "-H", "Content-Type:application/json"]),
|
||||
payload,
|
||||
);
|
||||
|
||||
if (res.exitCode !== 0) {
|
||||
const msg = res.stderr.trim() || res.stdout.trim() || `exit code ${res.exitCode}`;
|
||||
throw new Error(`${comment.path}:${comment.line}: ${msg}`);
|
||||
}
|
||||
}),
|
||||
);
|
||||
|
||||
for (const r of results) {
|
||||
if (r.status === "rejected") {
|
||||
errors.push(r.reason instanceof Error ? r.reason.message : String(r.reason));
|
||||
}
|
||||
}
|
||||
|
||||
if (errors.length > 0 && errors.length === fileComments.length) {
|
||||
// All failed — throw
|
||||
throw new Error(`Failed to post inline comments:\n${errors.join("\n")}`);
|
||||
}
|
||||
// Partial failures: some comments posted, some didn't — log but don't throw
|
||||
if (errors.length > 0) {
|
||||
console.error(`Warning: ${errors.length}/${fileComments.length} inline comments failed:\n${errors.join("\n")}`);
|
||||
}
|
||||
}
|
||||
|
||||
// 3. Approve if requested
|
||||
if (action === "approve") {
|
||||
const approveResult = await runtime.runCommandWithInput(
|
||||
"glab",
|
||||
apiArgs(ref.host, `${mrEndpoint}/approve`, ["--method", "POST", "--input", "-", "-H", "Content-Type:application/json"]),
|
||||
"{}",
|
||||
);
|
||||
if (approveResult.exitCode !== 0) {
|
||||
const msg = approveResult.stderr.trim() || approveResult.stdout.trim() || `exit code ${approveResult.exitCode}`;
|
||||
throw new Error(`Failed to approve MR: ${msg}`);
|
||||
}
|
||||
}
|
||||
}
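// Usage sketch (hypothetical values throughout): post a comment-only review with one
// inline note against a GitLab MR. Assumes `runtime` provides runCommandWithInput,
// since notes and discussions are POSTed with a JSON body on stdin, and `ref` is the
// GlMRRef obtained from the MR URL.
async function exampleSubmitComment(runtime: PRRuntime, ref: GlMRRef): Promise<void> {
  await submitGlMRReview(
    runtime,
    ref,
    "0123abcd", // head SHA of the MR (placeholder)
    "comment",
    "Looks close, two inline notes below.",
    [{ path: "src/auth.ts", line: 42, side: "RIGHT", body: "Consider validating token expiry here." }],
  );
}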
|
||||
432
extensions/plannotator/generated/pr-provider.ts
Normal file
@@ -0,0 +1,432 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/pr-provider.ts
|
||||
/**
|
||||
* Runtime-agnostic PR provider shared by Bun runtimes and Pi.
|
||||
*
|
||||
* Dispatches to platform-specific implementations (GitHub, GitLab)
|
||||
* based on the `platform` field in PRRef/PRMetadata.
|
||||
*
|
||||
* Same pattern as review-core.ts: a runtime interface abstracts subprocess
|
||||
* execution so the logic is reusable across Bun and Node/jiti.
|
||||
*/
|
||||
|
||||
import { checkGhAuth, getGhUser, fetchGhPR, fetchGhPRContext, fetchGhPRFileContent, submitGhPRReview, fetchGhPRViewedFiles, markGhFilesViewed, fetchGhPRStack, fetchGhPRList } from "./pr-github";
|
||||
import { checkGlAuth, getGlUser, fetchGlMR, fetchGlMRContext, fetchGlFileContent, submitGlMRReview } from "./pr-gitlab";
|
||||
|
||||
// --- Runtime Types ---
|
||||
|
||||
export interface CommandResult {
|
||||
stdout: string;
|
||||
stderr: string;
|
||||
exitCode: number;
|
||||
}
|
||||
|
||||
export interface PRRuntime {
|
||||
runCommand: (
|
||||
cmd: string,
|
||||
args: string[],
|
||||
) => Promise<CommandResult>;
|
||||
runCommandWithInput?: (
|
||||
cmd: string,
|
||||
args: string[],
|
||||
input: string,
|
||||
) => Promise<CommandResult>;
|
||||
}
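// Minimal PRRuntime sketch backed by Node's child_process, illustrative only; the real
// runtimes are supplied by the Bun apps and the Pi extension host, not by this module.
import { execFile } from "node:child_process";

export const exampleNodeRuntime: PRRuntime = {
  runCommand: (cmd, args) =>
    new Promise((done) => {
      execFile(cmd, args, { maxBuffer: 16 * 1024 * 1024 }, (error, stdout, stderr) => {
        // execFile reports a non-zero exit via error.code; fall back to 1 when absent
        const exitCode = error ? (typeof error.code === "number" ? error.code : 1) : 0;
        done({ stdout: stdout ?? "", stderr: stderr ?? "", exitCode });
      });
    }),
};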
|
||||
|
||||
// --- Platform Types ---
|
||||
|
||||
export type Platform = "github" | "gitlab";
|
||||
|
||||
/** GitHub PR reference */
|
||||
export interface GithubPRRef {
|
||||
platform: "github";
|
||||
host: string;
|
||||
owner: string;
|
||||
repo: string;
|
||||
number: number;
|
||||
}
|
||||
|
||||
/** GitLab MR reference */
|
||||
export interface GitlabMRRef {
|
||||
platform: "gitlab";
|
||||
host: string;
|
||||
projectPath: string;
|
||||
iid: number;
|
||||
}
|
||||
|
||||
/** Discriminated union — auto-detected from URL */
|
||||
export type PRRef = GithubPRRef | GitlabMRRef;
|
||||
|
||||
/** GitHub PR metadata */
|
||||
export interface GithubPRMetadata {
|
||||
platform: "github";
|
||||
host: string;
|
||||
owner: string;
|
||||
repo: string;
|
||||
number: number;
|
||||
/** GraphQL node ID for the PR — used for markFileAsViewed mutations */
|
||||
prNodeId?: string;
|
||||
title: string;
|
||||
author: string;
|
||||
baseBranch: string;
|
||||
headBranch: string;
|
||||
/** Repository default branch, used to infer whether this PR targets another PR branch. */
|
||||
defaultBranch?: string;
|
||||
baseSha: string;
|
||||
headSha: string;
|
||||
/** Merge-base SHA — the common ancestor commit used to compute the PR diff. Differs from baseSha when the base branch has moved. */
|
||||
mergeBaseSha?: string;
|
||||
url: string;
|
||||
}
|
||||
|
||||
/** GitLab MR metadata */
|
||||
export interface GitlabMRMetadata {
|
||||
platform: "gitlab";
|
||||
host: string;
|
||||
projectPath: string;
|
||||
iid: number;
|
||||
title: string;
|
||||
author: string;
|
||||
baseBranch: string;
|
||||
headBranch: string;
|
||||
/** Project default branch, used to infer whether this MR targets another MR branch. */
|
||||
defaultBranch?: string;
|
||||
baseSha: string;
|
||||
headSha: string;
|
||||
/** Merge-base SHA — the common ancestor commit used to compute the MR diff. */
|
||||
mergeBaseSha?: string;
|
||||
url: string;
|
||||
}
|
||||
|
||||
/** Discriminated union — downstream gets type narrowing for free */
|
||||
export type PRMetadata = GithubPRMetadata | GitlabMRMetadata;
|
||||
|
||||
// --- PR Context Types (platform-agnostic) ---
|
||||
|
||||
export interface PRComment {
|
||||
id: string;
|
||||
author: string;
|
||||
body: string;
|
||||
createdAt: string;
|
||||
url: string;
|
||||
}
|
||||
|
||||
export interface PRReview {
|
||||
id: string;
|
||||
author: string;
|
||||
state: string;
|
||||
body: string;
|
||||
submittedAt: string;
|
||||
url?: string;
|
||||
}
|
||||
|
||||
export interface PRCheck {
|
||||
name: string;
|
||||
status: string;
|
||||
conclusion: string | null;
|
||||
workflowName: string;
|
||||
detailsUrl: string;
|
||||
}
|
||||
|
||||
export interface PRLinkedIssue {
|
||||
number: number;
|
||||
url: string;
|
||||
repo: string;
|
||||
}
|
||||
|
||||
export interface PRThreadComment {
|
||||
id: string;
|
||||
author: string;
|
||||
body: string;
|
||||
createdAt: string;
|
||||
url: string;
|
||||
diffHunk?: string;
|
||||
}
|
||||
|
||||
export interface PRReviewThread {
|
||||
id: string;
|
||||
isResolved: boolean;
|
||||
isOutdated: boolean;
|
||||
path: string;
|
||||
line: number | null;
|
||||
startLine: number | null;
|
||||
diffSide: 'LEFT' | 'RIGHT' | null;
|
||||
comments: PRThreadComment[];
|
||||
}
|
||||
|
||||
export interface PRContext {
|
||||
body: string;
|
||||
state: string;
|
||||
isDraft: boolean;
|
||||
labels: Array<{ name: string; color: string }>;
|
||||
reviewDecision: string;
|
||||
mergeable: string;
|
||||
mergeStateStatus: string;
|
||||
comments: PRComment[];
|
||||
reviews: PRReview[];
|
||||
reviewThreads: PRReviewThread[];
|
||||
checks: PRCheck[];
|
||||
linkedIssues: PRLinkedIssue[];
|
||||
}
|
||||
|
||||
export interface PRReviewFileComment {
|
||||
path: string;
|
||||
line: number;
|
||||
side: "LEFT" | "RIGHT";
|
||||
body: string;
|
||||
start_line?: number;
|
||||
start_side?: "LEFT" | "RIGHT";
|
||||
}
|
||||
|
||||
export type PRDiffScope = "layer" | "full-stack";
|
||||
|
||||
export interface PRDiffScopeOption {
|
||||
id: PRDiffScope;
|
||||
label: string;
|
||||
description: string;
|
||||
enabled: boolean;
|
||||
}
|
||||
|
||||
export interface PRStackInfo {
|
||||
isStacked: boolean;
|
||||
baseBranch: string;
|
||||
defaultBranch?: string;
|
||||
label: string;
|
||||
source: "branch-inferred" | "tree-discovered" | "github-native" | "gitlab-native" | "graphite" | "ghstack";
|
||||
}
|
||||
|
||||
export interface PRStackNode {
|
||||
branch: string;
|
||||
number?: number;
|
||||
title?: string;
|
||||
url?: string;
|
||||
isCurrent: boolean;
|
||||
isDefaultBranch: boolean;
|
||||
state?: 'open' | 'merged' | 'closed';
|
||||
}
|
||||
|
||||
export interface PRStackTree {
|
||||
nodes: PRStackNode[];
|
||||
}
|
||||
|
||||
export interface PRListItem {
|
||||
id: string;
|
||||
number: number;
|
||||
title: string;
|
||||
author: string;
|
||||
url: string;
|
||||
baseBranch: string;
|
||||
state: 'open' | 'closed' | 'merged';
|
||||
}
|
||||
|
||||
// --- Label Helpers ---
|
||||
// Accept either PRRef or PRMetadata (both have `platform` discriminant)
|
||||
|
||||
type HasPlatform = PRRef | PRMetadata;
|
||||
|
||||
/** "GitHub" or "GitLab" */
|
||||
export function getPlatformLabel(m: HasPlatform): string {
|
||||
return m.platform === "github" ? "GitHub" : "GitLab";
|
||||
}
|
||||
|
||||
/** "PR" or "MR" */
|
||||
export function getMRLabel(m: HasPlatform): string {
|
||||
return m.platform === "github" ? "PR" : "MR";
|
||||
}
|
||||
|
||||
/** "#123" or "!42" */
|
||||
export function getMRNumberLabel(m: HasPlatform): string {
|
||||
if (m.platform === "github") return `#${m.number}`;
|
||||
return `!${m.iid}`;
|
||||
}
|
||||
|
||||
/** "owner/repo" or "group/project" */
|
||||
export function getDisplayRepo(m: HasPlatform): string {
|
||||
if (m.platform === "github") return `${m.owner}/${m.repo}`;
|
||||
return m.projectPath;
|
||||
}
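// Illustrative composition of the label helpers (values are hypothetical):
const exampleRef: GitlabMRRef = { platform: "gitlab", host: "gitlab.com", projectPath: "group/project", iid: 7 };
const exampleHeading = `${getPlatformLabel(exampleRef)} ${getMRLabel(exampleRef)} ${getMRNumberLabel(exampleRef)} in ${getDisplayRepo(exampleRef)}`;
// → "GitLab MR !7 in group/project"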
|
||||
|
||||
/** Reconstruct a PRRef from metadata */
|
||||
export function prRefFromMetadata(m: PRMetadata): PRRef {
|
||||
if (m.platform === "github") {
|
||||
return { platform: "github", host: m.host, owner: m.owner, repo: m.repo, number: m.number };
|
||||
}
|
||||
return { platform: "gitlab", host: m.host, projectPath: m.projectPath, iid: m.iid };
|
||||
}
|
||||
|
||||
export function isSameProject(a: PRRef, b: PRRef): boolean {
|
||||
if (a.platform !== b.platform) return false;
|
||||
if (a.platform === "github" && b.platform === "github") {
|
||||
return a.host === b.host && a.owner === b.owner && a.repo === b.repo;
|
||||
}
|
||||
if (a.platform === "gitlab" && b.platform === "gitlab") {
|
||||
return a.host === b.host && a.projectPath === b.projectPath;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/** CLI tool name for the platform */
|
||||
export function getCliName(ref: PRRef): string {
|
||||
return ref.platform === "github" ? "gh" : "glab";
|
||||
}
|
||||
|
||||
/** Install URL for the platform CLI */
|
||||
export function getCliInstallUrl(ref: PRRef): string {
|
||||
return ref.platform === "github"
|
||||
? "https://cli.github.com"
|
||||
: "https://gitlab.com/gitlab-org/cli";
|
||||
}
|
||||
|
||||
/** Encode a file path for use in platform API URLs */
|
||||
export function encodeApiFilePath(filePath: string): string {
|
||||
return encodeURIComponent(filePath);
|
||||
}
|
||||
|
||||
// --- URL Parsing ---
|
||||
|
||||
/**
|
||||
* Parse a PR/MR URL into its components. Auto-detects platform.
|
||||
*
|
||||
* Handles:
|
||||
* - GitHub: https://github.com/owner/repo/pull/123[/files|/commits]
|
||||
* - GitHub Enterprise: https://ghe.company.com/owner/repo/pull/123
|
||||
* - GitLab: https://gitlab.com/group/subgroup/project/-/merge_requests/42[/diffs]
|
||||
* - Self-hosted GitLab: https://gitlab.mycompany.com/group/project/-/merge_requests/42
|
||||
*
|
||||
* GitLab is checked first because `/-/merge_requests/` is unambiguous,
|
||||
* while `/pull/` could theoretically appear on any host.
|
||||
*/
|
||||
export function parsePRUrl(url: string): PRRef | null {
|
||||
if (!url) return null;
|
||||
|
||||
// GitLab: https://{host}/{projectPath}/-/merge_requests/{iid}[/...]
|
||||
// Checked first — `/-/merge_requests/` is the most specific pattern.
|
||||
const glMatch = url.match(
|
||||
/^https?:\/\/([^/]+)\/(.+?)\/-\/merge_requests\/(\d+)/,
|
||||
);
|
||||
if (glMatch) {
|
||||
return {
|
||||
platform: "gitlab",
|
||||
host: glMatch[1],
|
||||
projectPath: glMatch[2],
|
||||
iid: parseInt(glMatch[3], 10),
|
||||
};
|
||||
}
|
||||
|
||||
// GitHub (including GHE): https://{host}/{owner}/{repo}/pull/{number}[/...]
|
||||
const ghMatch = url.match(
|
||||
/^https?:\/\/([^/]+)\/([^/]+)\/([^/]+)\/pull\/(\d+)/,
|
||||
);
|
||||
if (ghMatch) {
|
||||
return {
|
||||
platform: "github",
|
||||
host: ghMatch[1],
|
||||
owner: ghMatch[2],
|
||||
repo: ghMatch[3],
|
||||
number: parseInt(ghMatch[4], 10),
|
||||
};
|
||||
}
|
||||
|
||||
return null;
|
||||
}
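// Usage sketch for parsePRUrl (URLs below are hypothetical):
parsePRUrl("https://gitlab.com/group/subgroup/project/-/merge_requests/42");
// → { platform: "gitlab", host: "gitlab.com", projectPath: "group/subgroup/project", iid: 42 }
parsePRUrl("https://github.com/acme/app/pull/123/files");
// → { platform: "github", host: "github.com", owner: "acme", repo: "app", number: 123 }
parsePRUrl("https://example.com/not-a-pr");
// → null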
|
||||
|
||||
// --- Dispatch Functions ---
|
||||
|
||||
export async function checkAuth(runtime: PRRuntime, ref: PRRef): Promise<void> {
|
||||
if (ref.platform === "github") return checkGhAuth(runtime, ref.host);
|
||||
return checkGlAuth(runtime, ref.host);
|
||||
}
|
||||
|
||||
export async function getUser(runtime: PRRuntime, ref: PRRef): Promise<string | null> {
|
||||
if (ref.platform === "github") return getGhUser(runtime, ref.host);
|
||||
return getGlUser(runtime, ref.host);
|
||||
}
|
||||
|
||||
export async function fetchPR(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
): Promise<{ metadata: PRMetadata; rawPatch: string }> {
|
||||
if (ref.platform === "github") return fetchGhPR(runtime, ref);
|
||||
return fetchGlMR(runtime, ref);
|
||||
}
|
||||
|
||||
export async function fetchPRContext(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
): Promise<PRContext> {
|
||||
if (ref.platform === "github") return fetchGhPRContext(runtime, ref);
|
||||
return fetchGlMRContext(runtime, ref);
|
||||
}
|
||||
|
||||
export async function fetchPRFileContent(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
sha: string,
|
||||
filePath: string,
|
||||
): Promise<string | null> {
|
||||
if (ref.platform === "github") return fetchGhPRFileContent(runtime, ref, sha, filePath);
|
||||
return fetchGlFileContent(runtime, ref, sha, filePath);
|
||||
}
|
||||
|
||||
export async function submitPRReview(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
headSha: string,
|
||||
action: "approve" | "comment",
|
||||
body: string,
|
||||
fileComments: PRReviewFileComment[],
|
||||
): Promise<void> {
|
||||
if (ref.platform === "github") return submitGhPRReview(runtime, ref, headSha, action, body, fileComments);
|
||||
return submitGlMRReview(runtime, ref, headSha, action, body, fileComments);
|
||||
}
|
||||
|
||||
/**
|
||||
* Fetch per-file "viewed" state for a PR.
|
||||
* GitHub: returns { filePath: isViewed } map.
|
||||
* GitLab: always returns {} (no server-side viewed state API).
|
||||
*/
|
||||
export async function fetchPRViewedFiles(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
): Promise<Record<string, boolean>> {
|
||||
if (ref.platform === "github") return fetchGhPRViewedFiles(runtime, ref);
|
||||
return {}; // GitLab has no server-side viewed state
|
||||
}
|
||||
|
||||
/**
|
||||
* Mark or unmark files as viewed in a PR.
|
||||
* GitHub: fires markFileAsViewed / unmarkFileAsViewed GraphQL mutations.
|
||||
* GitLab: no-op (no server-side viewed state API).
|
||||
*/
|
||||
export async function markPRFilesViewed(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
prNodeId: string,
|
||||
filePaths: string[],
|
||||
viewed: boolean,
|
||||
): Promise<void> {
|
||||
if (ref.platform === "github") return markGhFilesViewed(runtime, ref, prNodeId, filePaths, viewed);
|
||||
// GitLab: no-op
|
||||
}
|
||||
|
||||
/**
|
||||
* Fetch the full stack tree for a stacked PR.
|
||||
* Walks up from the current PR to the default branch, resolving
|
||||
* PR numbers and titles for each intermediate branch.
|
||||
* Returns null if the PR is not stacked or the API call fails.
|
||||
*/
|
||||
export async function fetchPRStack(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
metadata: PRMetadata,
|
||||
): Promise<PRStackTree | null> {
|
||||
if (ref.platform === "github") return fetchGhPRStack(runtime, ref, metadata);
|
||||
return null; // GitLab: not yet implemented
|
||||
}
|
||||
|
||||
export async function fetchPRList(
|
||||
runtime: PRRuntime,
|
||||
ref: PRRef,
|
||||
): Promise<PRListItem[]> {
|
||||
if (ref.platform === "github") return fetchGhPRList(runtime, ref);
|
||||
return []; // GitLab: not yet implemented
|
||||
}
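// End-to-end sketch of the dispatchers (hypothetical URL; `runtime` is whichever
// PRRuntime the host supplies): parse the URL, verify CLI auth, then load metadata,
// the raw patch, and review context without caring which platform it is.
async function exampleLoadReview(runtime: PRRuntime, url: string) {
  const ref = parsePRUrl(url);
  if (!ref) throw new Error(`Unrecognized PR/MR URL: ${url}`);
  await checkAuth(runtime, ref);
  const { metadata, rawPatch } = await fetchPR(runtime, ref);
  const context = await fetchPRContext(runtime, ref);
  console.log(
    `${getMRLabel(ref)} ${getMRNumberLabel(ref)}: ${metadata.title}`,
    `(${context.checks.length} checks, ${context.comments.length} comments, patch ${rawPatch.length} bytes)`,
  );
  return { metadata, context };
}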
|
||||
195
extensions/plannotator/generated/pr-stack.ts
Normal file
@@ -0,0 +1,195 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/pr-stack.ts
|
||||
import type { DiffResult, ReviewGitRuntime } from "./review-core";
|
||||
import type {
|
||||
PRDiffScopeOption,
|
||||
PRMetadata,
|
||||
PRStackInfo,
|
||||
PRStackTree,
|
||||
PRStackNode,
|
||||
} from "./pr-provider";
|
||||
export type { PRDiffScope, PRDiffScopeOption, PRStackInfo, PRStackTree, PRStackNode } from "./pr-provider";
|
||||
|
||||
function branchNameIsSafe(branch: string): boolean {
|
||||
return branch.trim().length > 0 && !branch.startsWith("-") && !branch.includes("\0");
|
||||
}
|
||||
|
||||
export function getPRStackInfo(metadata: PRMetadata | undefined): PRStackInfo | null {
|
||||
if (!metadata?.defaultBranch) return null;
|
||||
if (metadata.baseBranch === metadata.defaultBranch) return null;
|
||||
|
||||
return {
|
||||
isStacked: true,
|
||||
baseBranch: metadata.baseBranch,
|
||||
defaultBranch: metadata.defaultBranch,
|
||||
label: `${metadata.headBranch} stacked on ${metadata.baseBranch}`,
|
||||
source: "branch-inferred",
|
||||
};
|
||||
}
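// Illustrative: a PR whose base branch is another feature branch (not the default
// branch) is inferred to be stacked. All field values below are hypothetical.
getPRStackInfo({
  platform: "github", host: "github.com", owner: "acme", repo: "app", number: 12,
  title: "Part 2", author: "dev", baseBranch: "feature/part-1", headBranch: "feature/part-2",
  defaultBranch: "main", baseSha: "aaa111", headSha: "bbb222", url: "https://github.com/acme/app/pull/12",
});
// → { isStacked: true, baseBranch: "feature/part-1", defaultBranch: "main",
//     label: "feature/part-2 stacked on feature/part-1", source: "branch-inferred" }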
|
||||
|
||||
export function resolveStackInfo(
|
||||
metadata: PRMetadata,
|
||||
stackTree: PRStackTree | null,
|
||||
existing?: PRStackInfo | null,
|
||||
): PRStackInfo | null {
|
||||
if (existing) return existing;
|
||||
if (!stackTree || stackTree.nodes.filter(n => !n.isDefaultBranch).length <= 1) return null;
|
||||
return getPRStackInfo(metadata) ?? {
|
||||
isStacked: true,
|
||||
baseBranch: metadata.baseBranch,
|
||||
defaultBranch: metadata.defaultBranch!,
|
||||
label: `Root of stack — ${metadata.headBranch}`,
|
||||
source: "tree-discovered",
|
||||
};
|
||||
}
|
||||
|
||||
export function getPRDiffScopeOptions(
|
||||
metadata: PRMetadata | undefined,
|
||||
hasLocalCheckout: boolean,
|
||||
): PRDiffScopeOption[] {
|
||||
const stackInfo = getPRStackInfo(metadata);
|
||||
|
||||
return [
|
||||
{
|
||||
id: "layer",
|
||||
label: "Layer",
|
||||
description: metadata?.baseBranch
|
||||
? `Only changes relative to ${metadata.baseBranch}.`
|
||||
: "Only changes from this review.",
|
||||
enabled: true,
|
||||
},
|
||||
{
|
||||
id: "full-stack",
|
||||
label: "Full stack",
|
||||
description: stackInfo?.defaultBranch
|
||||
? `All changes from ${stackInfo.defaultBranch} to HEAD in the local checkout.`
|
||||
: "All changes from the default branch to HEAD in the local checkout.",
|
||||
enabled: Boolean(stackInfo && hasLocalCheckout),
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
export async function resolvePRFullStackBaseRef(
|
||||
runtime: ReviewGitRuntime,
|
||||
defaultBranch: string,
|
||||
cwd?: string,
|
||||
): Promise<string | null> {
|
||||
const remoteRef = `origin/${defaultBranch}`;
|
||||
const remote = await runtime.runGit(
|
||||
["show-ref", "--verify", "--quiet", `refs/remotes/${remoteRef}`],
|
||||
{ cwd },
|
||||
);
|
||||
if (remote.exitCode === 0) return remoteRef;
|
||||
|
||||
const local = await runtime.runGit(
|
||||
["show-ref", "--verify", "--quiet", `refs/heads/${defaultBranch}`],
|
||||
{ cwd },
|
||||
);
|
||||
if (local.exitCode === 0) return defaultBranch;
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
export async function runPRFullStackDiff(
|
||||
runtime: ReviewGitRuntime,
|
||||
metadata: PRMetadata,
|
||||
cwd?: string,
|
||||
): Promise<DiffResult> {
|
||||
const defaultBranch = metadata.defaultBranch;
|
||||
if (!defaultBranch || !branchNameIsSafe(defaultBranch)) {
|
||||
return {
|
||||
patch: "",
|
||||
label: "Full stack diff unavailable",
|
||||
error: "Could not determine a safe default branch for this review.",
|
||||
};
|
||||
}
|
||||
|
||||
const baseRef = await resolvePRFullStackBaseRef(runtime, defaultBranch, cwd);
|
||||
if (!baseRef) {
|
||||
return {
|
||||
patch: "",
|
||||
label: "Full stack diff unavailable",
|
||||
error: `Could not find origin/${defaultBranch} or local ${defaultBranch} in this checkout.`,
|
||||
};
|
||||
}
|
||||
|
||||
const diffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
"--end-of-options",
|
||||
`${baseRef}...HEAD`,
|
||||
];
|
||||
const diff = await runtime.runGit(diffArgs, { cwd });
|
||||
if (diff.exitCode !== 0) {
|
||||
const message = diff.stderr.trim() || `git ${diffArgs.join(" ")} failed`;
|
||||
return {
|
||||
patch: "",
|
||||
label: "Full stack diff unavailable",
|
||||
error: message.split("\n").find((line) => line.trim().length > 0) ?? message,
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
patch: diff.stdout,
|
||||
label: `Full stack diff vs ${baseRef}`,
|
||||
};
|
||||
}
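// Usage sketch (assumes a ReviewGitRuntime whose runGit shells out to git in the local
// checkout; constructing that runtime lives in review-core and is not shown here).
async function exampleFullStackDiff(git: ReviewGitRuntime, metadata: PRMetadata, cwd: string) {
  const result = await runPRFullStackDiff(git, metadata, cwd);
  if (result.error) {
    console.error(`${result.label}: ${result.error}`);
    return null;
  }
  console.log(result.label); // e.g. "Full stack diff vs origin/main"
  return result.patch;
}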
|
||||
|
||||
/**
|
||||
* Fetch and checkout a PR/MR head in a local worktree.
|
||||
* Returns true if the checkout succeeded, false otherwise.
|
||||
*/
|
||||
export async function checkoutPRHead(
|
||||
runtime: ReviewGitRuntime,
|
||||
metadata: PRMetadata,
|
||||
cwd: string,
|
||||
): Promise<boolean> {
|
||||
const refSpec = metadata.platform === "github"
|
||||
? `refs/pull/${metadata.number}/head`
|
||||
: `refs/merge-requests/${metadata.iid}/head`;
|
||||
|
||||
const fetch = await runtime.runGit(["fetch", "origin", refSpec], { cwd });
|
||||
if (fetch.exitCode !== 0) return false;
|
||||
|
||||
const checkout = await runtime.runGit(["checkout", "FETCH_HEAD"], { cwd });
|
||||
return checkout.exitCode === 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Build a minimal stack tree from existing metadata (no API calls).
|
||||
* Used as a fallback when the full stack tree hasn't loaded yet.
|
||||
*/
|
||||
export function buildMinimalStackTree(
|
||||
metadata: PRMetadata,
|
||||
stackInfo: PRStackInfo,
|
||||
): PRStackTree {
|
||||
const nodes: PRStackNode[] = [];
|
||||
|
||||
if (stackInfo.defaultBranch) {
|
||||
nodes.push({
|
||||
branch: stackInfo.defaultBranch,
|
||||
isCurrent: false,
|
||||
isDefaultBranch: true,
|
||||
});
|
||||
}
|
||||
|
||||
if (stackInfo.baseBranch !== stackInfo.defaultBranch) {
|
||||
nodes.push({
|
||||
branch: stackInfo.baseBranch,
|
||||
isCurrent: false,
|
||||
isDefaultBranch: false,
|
||||
});
|
||||
}
|
||||
|
||||
nodes.push({
|
||||
branch: metadata.headBranch,
|
||||
number: metadata.platform === "github" ? metadata.number : metadata.iid,
|
||||
title: metadata.title,
|
||||
url: metadata.url,
|
||||
isCurrent: true,
|
||||
isDefaultBranch: false,
|
||||
});
|
||||
|
||||
return { nodes };
|
||||
}
|
||||
72
extensions/plannotator/generated/project.ts
Normal file
@@ -0,0 +1,72 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/project.ts
|
||||
/**
|
||||
* Project Utility — Pure Functions
|
||||
*
|
||||
* String sanitization and path extraction helpers.
|
||||
* Runtime-agnostic: no Bun or Node-specific APIs.
|
||||
*/
|
||||
|
||||
/**
|
||||
* Sanitize a string for use as a tag
|
||||
* - lowercase
|
||||
* - replace spaces/underscores with hyphens
|
||||
* - remove special characters
|
||||
* - trim to reasonable length
|
||||
*/
|
||||
export function sanitizeTag(name: string): string | null {
|
||||
if (!name || typeof name !== "string") return null;
|
||||
|
||||
const sanitized = name
|
||||
.toLowerCase()
|
||||
.trim()
|
||||
.replace(/[\s_]+/g, "-") // spaces/underscores -> hyphens
|
||||
.replace(/[^a-z0-9-]/g, "") // remove special chars
|
||||
.replace(/-+/g, "-") // collapse multiple hyphens
|
||||
.replace(/^-|-$/g, "") // trim leading/trailing hyphens
|
||||
.slice(0, 30); // max 30 chars
|
||||
|
||||
return sanitized.length >= 2 ? sanitized : null;
|
||||
}
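// Illustrative inputs and outputs for sanitizeTag:
sanitizeTag("My Project_Name!"); // → "my-project-name"
sanitizeTag("@");                // → null (shorter than 2 chars once sanitized)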
|
||||
|
||||
/**
|
||||
* Extract repo name from a git root path
|
||||
*/
|
||||
export function extractRepoName(gitRootPath: string): string | null {
|
||||
if (!gitRootPath || typeof gitRootPath !== "string") return null;
|
||||
|
||||
const trimmed = gitRootPath.trim().replace(/\/+$/, ""); // remove trailing slashes
|
||||
const parts = trimmed.split("/");
|
||||
const name = parts[parts.length - 1];
|
||||
|
||||
return sanitizeTag(name);
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract directory name from a path
|
||||
*/
|
||||
export function extractDirName(path: string): string | null {
|
||||
if (!path || typeof path !== "string") return null;
|
||||
|
||||
const trimmed = path.trim().replace(/\/+$/, "");
|
||||
if (trimmed === "" || trimmed === "/") return null;
|
||||
|
||||
const parts = trimmed.split("/");
|
||||
const name = parts[parts.length - 1];
|
||||
|
||||
// Skip generic names
|
||||
const skipNames = new Set(["home", "users", "user", "root", "tmp", "var"]);
|
||||
if (skipNames.has(name.toLowerCase())) return null;
|
||||
|
||||
return sanitizeTag(name);
|
||||
}
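// Illustrative (paths are hypothetical):
extractDirName("/Users/alice/dev/My Repo/"); // → "my-repo"
extractDirName("/tmp");                      // → null ("tmp" is in the generic-name skip list)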
|
||||
|
||||
/**
|
||||
* Extract hostname from a URL string, or return the original string on failure.
|
||||
*/
|
||||
export function hostnameOrFallback(url: string): string {
|
||||
try {
|
||||
return new URL(url).hostname;
|
||||
} catch {
|
||||
return url;
|
||||
}
|
||||
}
|
||||
245
extensions/plannotator/generated/prompts.ts
Normal file
@@ -0,0 +1,245 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/prompts.ts
|
||||
import { loadConfig, type PlannotatorConfig, type PromptRuntime } from "./config";
|
||||
|
||||
// ─── Template engine ─────────────────────────────────────────────────────────
|
||||
|
||||
export function resolveTemplate(
|
||||
template: string,
|
||||
vars: Record<string, string | undefined>,
|
||||
): string {
|
||||
return template.replace(/\{\{(\w+)\}\}/g, (match, key) => {
|
||||
const val = vars[key];
|
||||
return val !== undefined ? val : match;
|
||||
});
|
||||
}
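// Minimal sketch of the {{var}} substitution: unknown keys are left in place.
resolveTemplate("Plan approved. Execute {{planFilePath}}. {{doneMsg}}", {
  planFilePath: "plans/auth.md", // hypothetical path
});
// → "Plan approved. Execute plans/auth.md. {{doneMsg}}" (doneMsg was not supplied, so the token survives)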
|
||||
|
||||
// ─── Tool name map ───────────────────────────────────────────────────────────
|
||||
|
||||
export const PLAN_TOOL_NAMES: Record<PromptRuntime, string> = {
|
||||
"claude-code": "ExitPlanMode",
|
||||
opencode: "submit_plan",
|
||||
"copilot-cli": "exit_plan_mode",
|
||||
pi: "plannotator_submit_plan",
|
||||
codex: "ExitPlanMode",
|
||||
"gemini-cli": "exit_plan_mode",
|
||||
};
|
||||
|
||||
export function getPlanToolName(runtime?: PromptRuntime | null): string {
|
||||
return (runtime && PLAN_TOOL_NAMES[runtime]) || "ExitPlanMode";
|
||||
}
|
||||
|
||||
export function buildPlanFileRule(toolName: string, planFilePath?: string): string {
|
||||
if (!planFilePath) return "";
|
||||
return `- Your plan is saved at: ${planFilePath}\n You can edit this file to make targeted changes, then pass its path to ${toolName}.\n`;
|
||||
}
|
||||
|
||||
// ─── Default constants ───────────────────────────────────────────────────────
|
||||
|
||||
export const DEFAULT_REVIEW_APPROVED_PROMPT = "# Code Review\n\nCode review completed — no changes requested.";
|
||||
|
||||
export const DEFAULT_REVIEW_DENIED_SUFFIX = "\nThe reviewer has identified issues above. You must address all of them.";
|
||||
|
||||
export const DEFAULT_PLAN_DENIED_PROMPT =
|
||||
"YOUR PLAN WAS NOT APPROVED.\n\nYou MUST revise the plan to address ALL of the feedback below before calling {{toolName}} again.\n\nRules:\n{{planFileRule}}- Do not resubmit the same plan unchanged.\n- Do NOT change the plan title (first # heading) unless the user explicitly asks you to.\n\n{{feedback}}";
|
||||
|
||||
export const DEFAULT_PLAN_APPROVED_PROMPT =
|
||||
"Plan approved. You now have full tool access (read, bash, edit, write). Execute the plan in {{planFilePath}}. {{doneMsg}}";
|
||||
|
||||
export const DEFAULT_PLAN_APPROVED_WITH_NOTES_PROMPT =
|
||||
"Plan approved with notes! You now have full tool access (read, bash, edit, write). Execute the plan in {{planFilePath}}. {{doneMsg}}\n\n## Implementation Notes\n\nThe user approved your plan but added the following notes to consider during implementation:\n\n{{feedback}}\n\nProceed with implementation, incorporating these notes where applicable.";
|
||||
|
||||
export const DEFAULT_PLAN_AUTO_APPROVED_PROMPT =
|
||||
"Plan auto-approved (non-interactive mode). Execute the plan now.";
|
||||
|
||||
export const DEFAULT_ANNOTATE_FILE_FEEDBACK_PROMPT =
|
||||
"# Markdown Annotations\n\n{{fileHeader}}: {{filePath}}\n\n{{feedback}}\n\nPlease address the annotation feedback above.";
|
||||
|
||||
export const DEFAULT_ANNOTATE_MESSAGE_FEEDBACK_PROMPT =
|
||||
"# Message Annotations\n\n{{feedback}}\n\nPlease address the annotation feedback above.";
|
||||
|
||||
export const DEFAULT_ANNOTATE_APPROVED_PROMPT = "The user approved.";
|
||||
|
||||
// ─── Core resolver ───────────────────────────────────────────────────────────
|
||||
|
||||
type PromptSection = "review" | "plan" | "annotate";
|
||||
type PromptKey = "approved" | "approvedWithNotes" | "autoApproved" | "denied"
|
||||
| "fileFeedback" | "messageFeedback";
|
||||
|
||||
interface PromptLookupOptions {
|
||||
section: PromptSection;
|
||||
key: PromptKey;
|
||||
runtime?: PromptRuntime | null;
|
||||
config?: PlannotatorConfig;
|
||||
fallback: string;
|
||||
runtimeFallbacks?: Partial<Record<PromptRuntime, string>>;
|
||||
}
|
||||
|
||||
function normalizePrompt(prompt: unknown): string | undefined {
|
||||
if (typeof prompt !== "string") return undefined;
|
||||
return prompt.trim() ? prompt : undefined;
|
||||
}
|
||||
|
||||
export function getConfiguredPrompt(options: PromptLookupOptions): string {
|
||||
const resolvedConfig = options.config ?? loadConfig();
|
||||
const section = resolvedConfig.prompts?.[options.section];
|
||||
const runtimePrompt = options.runtime
|
||||
? normalizePrompt(section?.runtimes?.[options.runtime]?.[options.key])
|
||||
: undefined;
|
||||
const genericPrompt = normalizePrompt(section?.[options.key]);
|
||||
const runtimeFallback = options.runtime
|
||||
? options.runtimeFallbacks?.[options.runtime]
|
||||
: undefined;
|
||||
|
||||
return runtimePrompt ?? genericPrompt ?? runtimeFallback ?? options.fallback;
|
||||
}
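// Illustrative lookup order: runtime-specific prompt, then generic section prompt, then
// runtime fallback, then built-in default. The config literal below is a hypothetical
// override and is cast because only the fields read here are spelled out.
getConfiguredPrompt({
  section: "plan",
  key: "denied",
  runtime: "pi",
  config: { prompts: { plan: { runtimes: { pi: { denied: "Revise the plan: {{feedback}}" } } } } } as PlannotatorConfig,
  fallback: DEFAULT_PLAN_DENIED_PROMPT,
});
// → "Revise the plan: {{feedback}}"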
|
||||
|
||||
type FeedbackVars = Record<string, string | undefined>;
|
||||
|
||||
// ─── Review wrappers ─────────────────────────────────────────────────────────
|
||||
|
||||
export function getReviewApprovedPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
): string {
|
||||
return getConfiguredPrompt({
|
||||
section: "review",
|
||||
key: "approved",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_REVIEW_APPROVED_PROMPT,
|
||||
});
|
||||
}
|
||||
|
||||
const REVIEW_DENIED_RUNTIME_DEFAULTS: Partial<Record<PromptRuntime, string>> = {
|
||||
opencode: "\n\nPlease address this feedback.",
|
||||
pi: "\n\nPlease address this feedback.",
|
||||
};
|
||||
|
||||
export function getReviewDeniedSuffix(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
): string {
|
||||
return getConfiguredPrompt({
|
||||
section: "review",
|
||||
key: "denied",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_REVIEW_DENIED_SUFFIX,
|
||||
runtimeFallbacks: REVIEW_DENIED_RUNTIME_DEFAULTS,
|
||||
});
|
||||
}
|
||||
|
||||
// ─── Plan wrappers ───────────────────────────────────────────────────────────
|
||||
|
||||
export function getPlanDeniedPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
vars?: FeedbackVars,
|
||||
): string {
|
||||
const template = getConfiguredPrompt({
|
||||
section: "plan",
|
||||
key: "denied",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_PLAN_DENIED_PROMPT,
|
||||
});
|
||||
return resolveTemplate(template, vars ?? {});
|
||||
}
|
||||
|
||||
const PLAN_APPROVED_RUNTIME_DEFAULTS: Partial<Record<PromptRuntime, string>> = {
|
||||
opencode: "Plan approved!{{doneMsg}}",
|
||||
};
|
||||
|
||||
export function getPlanApprovedPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
vars?: FeedbackVars,
|
||||
): string {
|
||||
const template = getConfiguredPrompt({
|
||||
section: "plan",
|
||||
key: "approved",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_PLAN_APPROVED_PROMPT,
|
||||
runtimeFallbacks: PLAN_APPROVED_RUNTIME_DEFAULTS,
|
||||
});
|
||||
return resolveTemplate(template, vars ?? {});
|
||||
}
|
||||
|
||||
const PLAN_APPROVED_WITH_NOTES_RUNTIME_DEFAULTS: Partial<Record<PromptRuntime, string>> = {
|
||||
opencode: "Plan approved with notes!\n{{doneMsg}}\n\n## Implementation Notes\n\nThe user approved your plan but added the following notes to consider during implementation:\n\n{{feedback}}{{proceedSuffix}}",
|
||||
};
|
||||
|
||||
export function getPlanApprovedWithNotesPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
vars?: FeedbackVars,
|
||||
): string {
|
||||
const template = getConfiguredPrompt({
|
||||
section: "plan",
|
||||
key: "approvedWithNotes",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_PLAN_APPROVED_WITH_NOTES_PROMPT,
|
||||
runtimeFallbacks: PLAN_APPROVED_WITH_NOTES_RUNTIME_DEFAULTS,
|
||||
});
|
||||
return resolveTemplate(template, { proceedSuffix: "", ...vars });
|
||||
}
|
||||
|
||||
export function getPlanAutoApprovedPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
): string {
|
||||
return getConfiguredPrompt({
|
||||
section: "plan",
|
||||
key: "autoApproved",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_PLAN_AUTO_APPROVED_PROMPT,
|
||||
});
|
||||
}
|
||||
|
||||
// ─── Annotate wrappers ──────────────────────────────────────────────────────
|
||||
|
||||
export function getAnnotateFileFeedbackPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
vars?: FeedbackVars,
|
||||
): string {
|
||||
const template = getConfiguredPrompt({
|
||||
section: "annotate",
|
||||
key: "fileFeedback",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_ANNOTATE_FILE_FEEDBACK_PROMPT,
|
||||
});
|
||||
return resolveTemplate(template, vars ?? {});
|
||||
}
|
||||
|
||||
export function getAnnotateMessageFeedbackPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
vars?: FeedbackVars,
|
||||
): string {
|
||||
const template = getConfiguredPrompt({
|
||||
section: "annotate",
|
||||
key: "messageFeedback",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_ANNOTATE_MESSAGE_FEEDBACK_PROMPT,
|
||||
});
|
||||
return resolveTemplate(template, vars ?? {});
|
||||
}
|
||||
|
||||
export function getAnnotateApprovedPrompt(
|
||||
runtime?: PromptRuntime | null,
|
||||
config?: PlannotatorConfig,
|
||||
): string {
|
||||
return getConfiguredPrompt({
|
||||
section: "annotate",
|
||||
key: "approved",
|
||||
runtime,
|
||||
config,
|
||||
fallback: DEFAULT_ANNOTATE_APPROVED_PROMPT,
|
||||
});
|
||||
}
|
||||
88
extensions/plannotator/generated/reference-common.ts
Normal file
@@ -0,0 +1,88 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/reference-common.ts
|
||||
// --- Vault file tree helpers ---
|
||||
|
||||
export const FILE_BROWSER_EXCLUDED = [
|
||||
"node_modules/",
|
||||
".git/",
|
||||
"dist/",
|
||||
"build/",
|
||||
".next/",
|
||||
"__pycache__/",
|
||||
".obsidian/",
|
||||
".trash/",
|
||||
".venv/",
|
||||
"vendor/",
|
||||
"target/",
|
||||
".cache/",
|
||||
"coverage/",
|
||||
".turbo/",
|
||||
".svelte-kit/",
|
||||
".nuxt/",
|
||||
".output/",
|
||||
".parcel-cache/",
|
||||
".webpack/",
|
||||
".expo/",
|
||||
"_site/",
|
||||
"public/",
|
||||
".jekyll-cache/",
|
||||
"out/",
|
||||
".docusaurus/",
|
||||
"storybook-static/",
|
||||
];
|
||||
|
||||
export interface VaultNode {
|
||||
name: string;
|
||||
path: string; // relative path within vault
|
||||
type: "file" | "folder";
|
||||
children?: VaultNode[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Build a nested file tree from a sorted list of relative paths.
|
||||
* Folders are sorted before files at each level.
|
||||
*/
|
||||
export function buildFileTree(relativePaths: string[]): VaultNode[] {
|
||||
const root: VaultNode[] = [];
|
||||
|
||||
for (const filePath of relativePaths) {
|
||||
const parts = filePath.split("/");
|
||||
let current = root;
|
||||
let pathSoFar = "";
|
||||
|
||||
for (let i = 0; i < parts.length; i++) {
|
||||
const part = parts[i];
|
||||
pathSoFar = pathSoFar ? `${pathSoFar}/${part}` : part;
|
||||
const isFile = i === parts.length - 1;
|
||||
|
||||
let node = current.find(
|
||||
(n) => n.name === part && n.type === (isFile ? "file" : "folder"),
|
||||
);
|
||||
if (!node) {
|
||||
node = {
|
||||
name: part,
|
||||
path: pathSoFar,
|
||||
type: isFile ? "file" : "folder",
|
||||
};
|
||||
if (!isFile) node.children = [];
|
||||
current.push(node);
|
||||
}
|
||||
if (!isFile) {
|
||||
current = node.children!;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Sort: folders first (alphabetical), then files (alphabetical)
|
||||
const sortNodes = (nodes: VaultNode[]) => {
|
||||
nodes.sort((a, b) => {
|
||||
if (a.type !== b.type) return a.type === "folder" ? -1 : 1;
|
||||
return a.name.localeCompare(b.name);
|
||||
});
|
||||
for (const node of nodes) {
|
||||
if (node.children) sortNodes(node.children);
|
||||
}
|
||||
};
|
||||
sortNodes(root);
|
||||
|
||||
return root;
|
||||
}
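// Illustrative: relative paths become a nested tree with folders sorted before files.
buildFileTree(["docs/plan.md", "README.md", "docs/api/auth.md"]);
// → [
//     { name: "docs", path: "docs", type: "folder", children: [
//         { name: "api", path: "docs/api", type: "folder", children: [
//             { name: "auth.md", path: "docs/api/auth.md", type: "file" } ] },
//         { name: "plan.md", path: "docs/plan.md", type: "file" } ] },
//     { name: "README.md", path: "README.md", type: "file" } ]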
|
||||
72
extensions/plannotator/generated/repo.ts
Normal file
@@ -0,0 +1,72 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/repo.ts
|
||||
export interface RepoInfo {
|
||||
/** Display string (e.g., "backnotprop/plannotator" or "my-project") */
|
||||
display: string;
|
||||
/** Current git branch (if in a git repo) */
|
||||
branch?: string;
|
||||
/** Host of the git remote (e.g., "github.com", "gitlab.com"). Populated only when the remote URL is parseable; absent for directory-only fallbacks. */
|
||||
host?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse org/repo from a git remote URL
|
||||
*
|
||||
* Handles:
|
||||
* - SSH: git@github.com:org/repo.git
|
||||
* - HTTPS: https://github.com/org/repo.git
|
||||
* - SSH with port: ssh://git@github.com:22/org/repo.git
|
||||
* - GitLab subgroups: git@gitlab.com:group/subgroup/project.git
|
||||
*/
|
||||
export function parseRemoteUrl(url: string): string | null {
|
||||
if (!url) return null;
|
||||
|
||||
// SSH with port: ssh://git@host:22/path.git — strip scheme+host+port
|
||||
const sshPortMatch = url.match(/^ssh:\/\/[^/]+(?::\d+)?\/(.+?)(?:\.git)?$/);
|
||||
if (sshPortMatch) return sshPortMatch[1];
|
||||
|
||||
// SSH format: git@host:path.git — capture full path after ':'
|
||||
// Reject URLs with :// scheme (HTTPS with non-standard ports like :8443)
|
||||
if (!url.includes("://")) {
|
||||
const sshMatch = url.match(/:([^/][^:]*?)(?:\.git)?$/);
|
||||
if (sshMatch) return sshMatch[1];
|
||||
}
|
||||
|
||||
// HTTPS format: https://host/path.git — capture full path after host
|
||||
const httpsMatch = url.match(/^https?:\/\/[^/]+\/(.+?)(?:\.git)?$/);
|
||||
if (httpsMatch) return httpsMatch[1];
|
||||
|
||||
return null;
|
||||
}
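// Illustrative remote URL shapes (all hypothetical):
parseRemoteUrl("git@gitlab.com:group/subgroup/project.git"); // → "group/subgroup/project"
parseRemoteUrl("https://github.com/org/repo.git");           // → "org/repo"
parseRemoteUrl("ssh://git@github.com:22/org/repo.git");      // → "org/repo"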
|
||||
|
||||
/**
|
||||
* Parse the host from a git remote URL. Returns null when the shape
|
||||
* doesn't match a known remote form. Used to identify the forge
|
||||
* (github.com, gitlab.com, self-hosted) so inline mention / issue
|
||||
* refs can link to the correct destination instead of assuming GitHub.
|
||||
*/
|
||||
export function parseRemoteHost(url: string): string | null {
|
||||
if (!url) return null;
|
||||
// ssh://git@host:port/path
|
||||
const sshPort = url.match(/^ssh:\/\/(?:[^@]+@)?([^:/]+)/i);
|
||||
if (sshPort) return sshPort[1];
|
||||
// git@host:path
|
||||
if (!url.includes('://')) {
|
||||
const ssh = url.match(/^[^@\s]+@([^:\s]+):/);
|
||||
if (ssh) return ssh[1];
|
||||
}
|
||||
// https://host/path or http://host/path
|
||||
const https = url.match(/^https?:\/\/([^/:]+)/i);
|
||||
if (https) return https[1];
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get directory name from path
|
||||
*/
|
||||
export function getDirName(path: string): string | null {
|
||||
if (!path) return null;
|
||||
const trimmed = path.trim().replace(/\/+$/, "");
|
||||
const parts = trimmed.split("/");
|
||||
return parts[parts.length - 1] || null;
|
||||
}
|
||||
510
extensions/plannotator/generated/resolve-file.ts
Normal file
@@ -0,0 +1,510 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/resolve-file.ts
|
||||
/**
|
||||
* Smart markdown file resolution.
|
||||
*
|
||||
* Resolves a user-provided path to an absolute file path using three strategies:
|
||||
* 1. Exact path (absolute or relative to cwd)
|
||||
* 2. Case-insensitive relative path search within project root
|
||||
* 3. Case-insensitive bare filename search within project root
|
||||
*
|
||||
* Used by both the CLI (`plannotator annotate`) and the `/api/doc` endpoint.
|
||||
*/
|
||||
|
||||
import { homedir } from "os";
|
||||
import { isAbsolute, join, resolve, win32 } from "path";
|
||||
import { existsSync, readdirSync, type Dirent } from "fs";
|
||||
|
||||
const MARKDOWN_PATH_REGEX = /\.mdx?$/i;
|
||||
|
||||
import { CODE_FILE_REGEX as CODE_FILE_BASENAME_REGEX } from "./code-file";
|
||||
export { CODE_FILE_REGEX, isCodeFilePath } from "./code-file";
|
||||
|
||||
const WINDOWS_DRIVE_PATH_PATTERNS = [
|
||||
/^\/cygdrive\/([a-zA-Z])\/(.+)$/,
|
||||
/^\/([a-zA-Z])\/(.+)$/,
|
||||
];
|
||||
|
||||
const IGNORED_DIRS = [
|
||||
"node_modules/",
|
||||
".git/",
|
||||
"dist/",
|
||||
"build/",
|
||||
".next/",
|
||||
"__pycache__/",
|
||||
".obsidian/",
|
||||
".trash/",
|
||||
];
|
||||
|
||||
const CODE_IGNORED_DIRS = [
|
||||
...IGNORED_DIRS,
|
||||
".turbo/",
|
||||
".cache/",
|
||||
"target/",
|
||||
"vendor/",
|
||||
"coverage/",
|
||||
".venv/",
|
||||
".pytest_cache/",
|
||||
];
|
||||
|
||||
export type ResolveResult =
|
||||
| { kind: "found"; path: string }
|
||||
| { kind: "not_found"; input: string }
|
||||
| { kind: "ambiguous"; input: string; matches: string[] }
|
||||
| { kind: "unavailable"; input: string };
|
||||
|
||||
function normalizeSeparators(input: string): string {
|
||||
return input.replace(/\\/g, "/");
|
||||
}
|
||||
|
||||
function stripTrailingSlashes(input: string): string {
|
||||
return input.replace(/\/+$/, "");
|
||||
}
|
||||
|
||||
export function expandHomePath(input: string, home = homedir()): string {
|
||||
if (input === "~") {
|
||||
return home;
|
||||
}
|
||||
|
||||
if (input.startsWith("~/") || input.startsWith("~\\")) {
|
||||
return join(home, input.slice(2));
|
||||
}
|
||||
|
||||
return input;
|
||||
}
|
||||
|
||||
export function stripWrappingQuotes(input: string): string {
|
||||
if (input.length < 2) {
|
||||
return input;
|
||||
}
|
||||
|
||||
const first = input[0];
|
||||
const last = input[input.length - 1];
|
||||
if ((first === '"' && last === '"') || (first === "'" && last === "'")) {
|
||||
return input.slice(1, -1);
|
||||
}
|
||||
|
||||
return input;
|
||||
}
|
||||
|
||||
export function normalizeUserPathInput(
|
||||
input: string,
|
||||
platform = process.platform,
|
||||
): string {
|
||||
const trimmedInput = input.trim();
|
||||
const unquotedInput = stripWrappingQuotes(trimmedInput);
|
||||
const expandedInput = expandHomePath(unquotedInput);
|
||||
|
||||
if (platform !== "win32") {
|
||||
return expandedInput;
|
||||
}
|
||||
|
||||
for (const pattern of WINDOWS_DRIVE_PATH_PATTERNS) {
|
||||
const match = expandedInput.match(pattern);
|
||||
if (!match) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const [, driveLetter, rest] = match;
|
||||
return `${driveLetter.toUpperCase()}:/${rest}`;
|
||||
}
|
||||
|
||||
return expandedInput;
|
||||
}
|
||||
|
||||
function isAbsoluteNormalizedUserPath(
|
||||
input: string,
|
||||
platform = process.platform,
|
||||
): boolean {
|
||||
if (hasWindowsDriveLetter(input)) {
|
||||
return true;
|
||||
}
|
||||
|
||||
return platform === "win32"
|
||||
? win32.isAbsolute(input)
|
||||
: isAbsolute(input);
|
||||
}
|
||||
|
||||
export function isAbsoluteUserPath(
|
||||
input: string,
|
||||
platform = process.platform,
|
||||
): boolean {
|
||||
return isAbsoluteNormalizedUserPath(normalizeUserPathInput(input, platform), platform);
|
||||
}
|
||||
|
||||
export function resolveUserPath(
|
||||
input: string,
|
||||
baseDir = process.cwd(),
|
||||
platform = process.platform,
|
||||
): string {
|
||||
const normalizedInput = normalizeUserPathInput(input, platform);
|
||||
if (!normalizedInput) {
|
||||
return "";
|
||||
}
|
||||
return isAbsoluteNormalizedUserPath(normalizedInput, platform)
|
||||
? resolveAbsolutePath(normalizedInput, platform)
|
||||
: resolve(baseDir, normalizedInput);
|
||||
}
|
||||
|
||||
function normalizeComparablePath(input: string): string {
|
||||
return stripTrailingSlashes(normalizeSeparators(resolveUserPath(input)));
|
||||
}
|
||||
|
||||
export function isWithinProjectRoot(candidate: string, projectRoot: string): boolean {
|
||||
const normalizedCandidate = normalizeComparablePath(candidate);
|
||||
const normalizedProjectRoot = normalizeComparablePath(projectRoot);
|
||||
return (
|
||||
normalizedCandidate === normalizedProjectRoot ||
|
||||
normalizedCandidate.startsWith(`${normalizedProjectRoot}/`)
|
||||
);
|
||||
}
|
||||
|
||||
function getLowercaseBasename(input: string): string {
|
||||
const normalizedInput = normalizeSeparators(input);
|
||||
return normalizedInput.split("/").pop()!.toLowerCase();
|
||||
}
|
||||
|
||||
function getLookupKey(input: string, isBareFilename: boolean): string {
|
||||
return isBareFilename ? getLowercaseBasename(input) : input.toLowerCase();
|
||||
}
|
||||
|
||||
function resolveAbsolutePath(
|
||||
input: string,
|
||||
platform = process.platform,
|
||||
): string {
|
||||
// Use win32.resolve for Windows paths regardless of reported platform
|
||||
return platform === "win32" || hasWindowsDriveLetter(input)
|
||||
? win32.resolve(input)
|
||||
: resolve(input);
|
||||
}
|
||||
|
||||
function isSearchableMarkdownPath(input: string): boolean {
|
||||
return MARKDOWN_PATH_REGEX.test(input.trim());
|
||||
}
|
||||
|
||||
/** Check if a path looks like a Windows absolute path (e.g. C:\ or C:/) */
|
||||
function hasWindowsDriveLetter(input: string): boolean {
|
||||
return /^[a-zA-Z]:[/\\]/.test(input);
|
||||
}
|
||||
|
||||
/** Cross-platform file existence check using Node fs (more reliable than Bun.file in compiled exes) */
|
||||
function fileExists(filePath: string): boolean {
|
||||
try {
|
||||
return existsSync(filePath);
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/** Recursively walk a directory collecting files matching `fileMatcher`, skipping ignored dirs. */
|
||||
function walkFiles(
|
||||
dir: string,
|
||||
root: string,
|
||||
results: string[],
|
||||
ignoredDirs: string[],
|
||||
fileMatcher: (name: string) => boolean,
|
||||
): void {
|
||||
const entries = readdirSync(dir, { withFileTypes: true }) as Dirent[];
|
||||
for (const entry of entries) {
|
||||
if (entry.isDirectory()) {
|
||||
if (ignoredDirs.some((d) => d === entry.name + "/")) continue;
|
||||
try {
|
||||
walkFiles(join(dir, entry.name), root, results, ignoredDirs, fileMatcher);
|
||||
} catch {
|
||||
/* skip dirs we can't read */
|
||||
}
|
||||
} else if (entry.isFile() && fileMatcher(entry.name)) {
|
||||
const relative = join(dir, entry.name)
|
||||
.slice(root.length + 1)
|
||||
.replace(/\\/g, "/");
|
||||
results.push(relative);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function walkMarkdownFiles(dir: string, root: string, results: string[], ignoredDirs: string[]): void {
|
||||
try {
|
||||
walkFiles(dir, root, results, ignoredDirs, (name) => /\.mdx?$/i.test(name));
|
||||
} catch {
|
||||
/* fail soft for markdown — preserves existing behavior */
|
||||
}
|
||||
}
|
||||
|
||||
// --- Code-file resolution (async, cached) ---
|
||||
|
||||
const FILE_LIST_CACHE_TTL_MS = 30_000;
|
||||
const fileListCache = new Map<
|
||||
string,
|
||||
{ promise: Promise<string[] | null>; startedAt: number }
|
||||
>();
|
||||
|
||||
function fileListCacheKey(projectRoot: string, kind: string): string {
|
||||
return `${projectRoot}|${kind}`;
|
||||
}
|
||||
|
||||
function startCodeWalk(projectRoot: string): Promise<string[] | null> {
|
||||
return Promise.resolve().then(() => {
|
||||
try {
|
||||
const results: string[] = [];
|
||||
walkFiles(projectRoot, projectRoot, results, CODE_IGNORED_DIRS, (name) =>
|
||||
CODE_FILE_BASENAME_REGEX.test(name),
|
||||
);
|
||||
return results;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Trigger (or return the in-flight) walk of `projectRoot` for code files.
|
||||
* Cached for `FILE_LIST_CACHE_TTL_MS`. Storing a Promise (not a value) makes
|
||||
* concurrent callers piggyback on the same walk — first arrival wins.
|
||||
*
|
||||
* Returns `null` (wrapped in Promise) when the walk fails (perms, etc).
|
||||
*/
|
||||
export function warmFileListCache(
|
||||
projectRoot: string,
|
||||
kind: "code",
|
||||
): Promise<string[] | null> {
|
||||
const key = fileListCacheKey(projectRoot, kind);
|
||||
const entry = fileListCache.get(key);
|
||||
if (entry && Date.now() - entry.startedAt < FILE_LIST_CACHE_TTL_MS) {
|
||||
return entry.promise;
|
||||
}
|
||||
const promise = startCodeWalk(projectRoot);
|
||||
fileListCache.set(key, { promise, startedAt: Date.now() });
|
||||
return promise;
|
||||
}
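// Illustrative: concurrent callers within the 30s TTL share the same in-flight walk,
// so both awaits below resolve from one directory traversal (project root is hypothetical).
const [listA, listB] = await Promise.all([
  warmFileListCache("/repo", "code"),
  warmFileListCache("/repo", "code"),
]);
// listA === listB when the walk succeeds: both come from the same cached Promise.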
|
||||
|
||||
/**
|
||||
* Resolve a code-file path within a project root.
|
||||
*
|
||||
* Strategies:
|
||||
* 1. Absolute path → use as-is.
|
||||
* 2. Exact relative from project root.
|
||||
* 3. If `baseDir` provided, literal `<baseDir>/<input>` existence check —
|
||||
* lets out-of-tree linked docs resolve their own relative references
|
||||
* (e.g. `../script.ts` in `~/notes/foo.md` finds `~/script.ts`).
|
||||
* 4. Case-insensitive suffix match against the cached file list:
|
||||
* - bare basename input → match any file with that basename;
|
||||
* - input with `/` → match files whose path equals or ends with `/<input>`
|
||||
* on a segment boundary (so `editor/App.tsx` matches `packages/editor/App.tsx`
|
||||
* but not `myeditor/App.tsx`).
|
||||
*
|
||||
* `..` segments in the input are honored: only `./` is stripped before suffix
|
||||
* matching. `../foo.ts` without a `baseDir` correctly falls through to
|
||||
* not_found rather than fabricating a match against `foo.ts` somewhere in cwd.
|
||||
*/
|
||||
export async function resolveCodeFile(
|
||||
input: string,
|
||||
projectRoot: string,
|
||||
baseDir?: string,
|
||||
): Promise<ResolveResult> {
|
||||
const originalInput = input.trim();
|
||||
const unquotedInput = stripWrappingQuotes(originalInput);
|
||||
const normalizedInput = normalizeUserPathInput(unquotedInput);
|
||||
const searchInput = normalizeSeparators(normalizedInput);
|
||||
|
||||
if (!searchInput) {
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
|
||||
|
||||
if (isAbsoluteNormalizedUserPath(normalizedInput)) {
|
||||
const absolutePath = resolveAbsolutePath(normalizedInput);
|
||||
if (fileExists(absolutePath)) {
|
||||
return { kind: "found", path: absolutePath };
|
||||
}
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
|
||||
|
||||
const fromRoot = resolve(projectRoot, searchInput);
|
||||
if (isWithinProjectRoot(fromRoot, projectRoot) && fileExists(fromRoot)) {
|
||||
return { kind: "found", path: fromRoot };
|
||||
}
|
||||
|
||||
if (baseDir) {
|
||||
const fromBase = resolve(baseDir, searchInput);
|
||||
if (fileExists(fromBase)) {
|
||||
return { kind: "found", path: fromBase };
|
||||
}
|
||||
}
|
||||
|
||||
const fileList = await warmFileListCache(projectRoot, "code");
|
||||
if (fileList === null) {
|
||||
return { kind: "unavailable", input: originalInput };
|
||||
}
|
||||
|
||||
// Strip leading `./` so suffix matching works on inputs like
|
||||
// `./editor/App.tsx` — file list entries never carry that segment.
|
||||
// `../` is intentionally NOT stripped: `..` is meaningful (escape parent),
|
||||
// not noise. If we can't honor it via baseDir, the input has no
|
||||
// suffix-match equivalent in the in-tree file list.
|
||||
const cleanedInput = searchInput.replace(/^(?:\.\/)+/, "");
|
||||
if (!cleanedInput || cleanedInput.startsWith("../")) {
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
|
||||
const target = cleanedInput.toLowerCase();
|
||||
const isBareFilename = !cleanedInput.includes("/");
|
||||
const matches: string[] = [];
|
||||
|
||||
for (const f of fileList) {
|
||||
const fl = f.toLowerCase();
|
||||
if (isBareFilename) {
|
||||
const base = fl.split("/").pop();
|
||||
if (base === target) matches.push(resolve(projectRoot, f));
|
||||
} else {
|
||||
if (fl === target || fl.endsWith("/" + target)) {
|
||||
matches.push(resolve(projectRoot, f));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (matches.length === 1) {
|
||||
return { kind: "found", path: matches[0] };
|
||||
}
|
||||
if (matches.length > 1) {
|
||||
return { kind: "ambiguous", input: originalInput, matches };
|
||||
}
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
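
// Illustrative sketch (not part of the generated module): the resolution
// strategies in order. Paths below are hypothetical examples.
async function exampleResolveCode(projectRoot: string): Promise<void> {
  // 1. Absolute paths are used as-is when they exist on disk.
  await resolveCodeFile("/tmp/script.ts", projectRoot);
  // 2-4. Relative input: exact path from the project root first, then a
  // case-insensitive suffix match against the cached file list, so
  // `editor/App.tsx` can find `packages/editor/App.tsx`.
  const result = await resolveCodeFile("editor/App.tsx", projectRoot);
  if (result.kind === "ambiguous") {
    console.log("multiple candidates:", result.matches.join(", "));
  }
}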
|
||||
|
||||
/**
 * Core single-pass resolution of a markdown file path within a project root.
 * Does not trim, strip wrapping quotes, or handle `@`-prefixed references;
 * the exported `resolveMarkdownFile` wrapper layers that on top.
 *
 * @param input - User-provided path (absolute, relative, or bare filename)
 * @param projectRoot - Project root directory to search within
 */
|
||||
function resolveMarkdownFileCore(
|
||||
input: string,
|
||||
projectRoot: string,
|
||||
): ResolveResult {
|
||||
const normalizedInput = normalizeUserPathInput(input);
|
||||
const searchInput = normalizeSeparators(normalizedInput);
|
||||
const isBareFilename = !searchInput.includes("/");
|
||||
const targetLookupKey = getLookupKey(searchInput, isBareFilename);
|
||||
|
||||
// Restrict to markdown files
|
||||
if (!isSearchableMarkdownPath(normalizedInput)) {
|
||||
return { kind: "not_found", input };
|
||||
}
|
||||
|
||||
// 1. Absolute path — use as-is (no project root restriction;
|
||||
// the user explicitly typed the full path)
|
||||
if (isAbsoluteNormalizedUserPath(normalizedInput)) {
|
||||
const absolutePath = resolveAbsolutePath(normalizedInput);
|
||||
if (fileExists(absolutePath)) {
|
||||
return { kind: "found", path: absolutePath };
|
||||
}
|
||||
return { kind: "not_found", input };
|
||||
}
|
||||
|
||||
// 2. Exact relative path from project root
|
||||
const fromRoot = resolve(projectRoot, searchInput);
|
||||
if (isWithinProjectRoot(fromRoot, projectRoot) && fileExists(fromRoot)) {
|
||||
return { kind: "found", path: fromRoot };
|
||||
}
|
||||
|
||||
// 3. Case-insensitive search (only scan markdown files)
|
||||
const allFiles: string[] = [];
|
||||
walkMarkdownFiles(projectRoot, projectRoot, allFiles, IGNORED_DIRS);
|
||||
const matches: string[] = [];
|
||||
|
||||
for (const match of allFiles) {
|
||||
const normalizedMatch = normalizeSeparators(match);
|
||||
const matchLookupKey = getLookupKey(normalizedMatch, isBareFilename);
|
||||
|
||||
if (matchLookupKey === targetLookupKey) {
|
||||
const full = resolve(projectRoot, normalizedMatch);
|
||||
if (isWithinProjectRoot(full, projectRoot)) {
|
||||
matches.push(full);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (matches.length === 1) {
|
||||
return { kind: "found", path: matches[0] };
|
||||
}
|
||||
if (matches.length > 1) {
|
||||
const projectRootPrefix = `${normalizeComparablePath(projectRoot)}/`;
|
||||
const relative = matches.map((match) =>
|
||||
normalizeComparablePath(match).replace(projectRootPrefix, ""),
|
||||
);
|
||||
return { kind: "ambiguous", input, matches: relative };
|
||||
}
|
||||
|
||||
return { kind: "not_found", input };
|
||||
}
|
||||
|
||||
/**
 * Resolve a markdown file path within a project root. Trims and unquotes the
 * input, then retries with any leading `@` stripped so chat-style references
 * like `@docs/plan.md` resolve as well.
 *
 * @param input - User-provided path (absolute, relative, or bare filename)
 * @param projectRoot - Project root directory to search within
 */
|
||||
export function resolveMarkdownFile(
|
||||
input: string,
|
||||
projectRoot: string,
|
||||
): ResolveResult {
|
||||
const originalInput = input.trim();
|
||||
const unquotedInput = stripWrappingQuotes(originalInput);
|
||||
|
||||
const primary = resolveMarkdownFileCore(unquotedInput, projectRoot);
|
||||
if (primary.kind === "found") {
|
||||
return primary;
|
||||
}
|
||||
if (primary.kind === "ambiguous") {
|
||||
return { ...primary, input: originalInput };
|
||||
}
|
||||
|
||||
if (!unquotedInput.startsWith("@")) {
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
|
||||
|
||||
const normalizedInput = unquotedInput.replace(/^@+/, "");
|
||||
if (!normalizedInput) {
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
|
||||
|
||||
const fallback = resolveMarkdownFileCore(normalizedInput, projectRoot);
|
||||
if (fallback.kind === "found") {
|
||||
return fallback;
|
||||
}
|
||||
if (fallback.kind === "ambiguous") {
|
||||
return { ...fallback, input: originalInput };
|
||||
}
|
||||
|
||||
return { kind: "not_found", input: originalInput };
|
||||
}
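
// Illustrative sketch (not part of the generated module): `@`-prefixed inputs
// fall back to a second resolution pass with the prefix stripped. The path is
// a hypothetical example.
function exampleResolveMarkdown(projectRoot: string): void {
  const direct = resolveMarkdownFile("docs/plan.md", projectRoot);
  const viaAtRef = resolveMarkdownFile("@docs/plan.md", projectRoot);
  // Both report the same kind (and path) when the file exists in the project.
  console.log(direct.kind, viaAtRef.kind);
}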
|
||||
|
||||
/**
|
||||
* Check if a directory contains at least one file matching the given extensions.
|
||||
* Used to validate folder annotation targets.
|
||||
*
|
||||
* @param dirPath - Directory to search
|
||||
* @param excludedDirs - Directory names to skip (with trailing slash, e.g. "node_modules/")
|
||||
* @param extensions - Regex to match file extensions (default: markdown only)
|
||||
*/
|
||||
export function hasMarkdownFiles(
|
||||
dirPath: string,
|
||||
excludedDirs: string[] = IGNORED_DIRS,
|
||||
extensions: RegExp = /\.mdx?$/i,
|
||||
): boolean {
|
||||
function walk(dir: string): boolean {
|
||||
let entries;
|
||||
try {
|
||||
entries = readdirSync(dir, { withFileTypes: true });
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
for (const entry of entries) {
|
||||
if (entry.isDirectory()) {
|
||||
if (excludedDirs.some((d) => d === entry.name + "/")) continue;
|
||||
if (walk(join(dir, entry.name))) return true;
|
||||
} else if (entry.isFile() && extensions.test(entry.name)) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
return walk(dirPath);
|
||||
}
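
// Illustrative sketch (not part of the generated module): validating a folder
// annotation target, and reusing the same walk with a different extension
// regex. The directories are hypothetical examples.
function exampleFolderCheck(): void {
  const docsOk = hasMarkdownFiles("./docs");
  const hasTypeScript = hasMarkdownFiles("./src", IGNORED_DIRS, /\.tsx?$/i);
  console.log({ docsOk, hasTypeScript });
}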
|
||||
748
extensions/plannotator/generated/review-core.ts
Normal file
@@ -0,0 +1,748 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/review-core.ts
|
||||
/**
|
||||
* Runtime-agnostic code-review core shared by Bun runtimes and Pi.
|
||||
*
|
||||
* Pi consumes a build-time copy of this file so its published package stays
|
||||
* self-contained while review diff logic remains sourced from one module.
|
||||
*/
|
||||
|
||||
import { resolve as resolvePath } from "node:path";
|
||||
|
||||
export type DiffType =
|
||||
| "uncommitted"
|
||||
| "staged"
|
||||
| "unstaged"
|
||||
| "last-commit"
|
||||
| "branch"
|
||||
| "merge-base"
|
||||
| "all"
|
||||
| `worktree:${string}`
|
||||
| "p4-default"
|
||||
| `p4-changelist:${string}`;
|
||||
|
||||
export interface DiffOption {
|
||||
id: string;
|
||||
label: string;
|
||||
}
|
||||
|
||||
export interface WorktreeInfo {
|
||||
path: string;
|
||||
branch: string | null;
|
||||
head: string;
|
||||
}
|
||||
|
||||
export interface AvailableBranches {
|
||||
local: string[];
|
||||
remote: string[];
|
||||
}
|
||||
|
||||
export interface GitContext {
|
||||
currentBranch: string;
|
||||
defaultBranch: string;
|
||||
diffOptions: DiffOption[];
|
||||
worktrees: WorktreeInfo[];
|
||||
availableBranches: AvailableBranches;
|
||||
cwd?: string;
|
||||
vcsType?: "git" | "p4";
|
||||
}
|
||||
|
||||
export interface DiffResult {
|
||||
patch: string;
|
||||
label: string;
|
||||
error?: string;
|
||||
}
|
||||
|
||||
export interface GitCommandResult {
|
||||
stdout: string;
|
||||
stderr: string;
|
||||
exitCode: number;
|
||||
}
|
||||
|
||||
export interface ReviewGitRuntime {
|
||||
runGit: (
|
||||
args: string[],
|
||||
options?: { cwd?: string; timeoutMs?: number },
|
||||
) => Promise<GitCommandResult>;
|
||||
readTextFile: (path: string) => Promise<string | null>;
|
||||
}
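
// Minimal sketch of a ReviewGitRuntime backed by Node's child_process
// (illustrative only, not part of the generated module; assumes `git` is on
// PATH). It shows the contract the helpers below expect: captured
// stdout/stderr plus a numeric exit code, and `null` for unreadable files.
async function exampleCreateNodeRuntime(): Promise<ReviewGitRuntime> {
  const { execFile } = await import("node:child_process");
  const { readFile } = await import("node:fs/promises");
  return {
    runGit: (args, options) =>
      new Promise<GitCommandResult>((done) => {
        execFile(
          "git",
          args,
          { cwd: options?.cwd, timeout: options?.timeoutMs },
          (error, stdout, stderr) => {
            const code = (error as { code?: unknown } | null)?.code;
            done({
              stdout,
              stderr,
              exitCode: error ? (typeof code === "number" ? code : 1) : 0,
            });
          },
        );
      }),
    readTextFile: async (path) => {
      try {
        return await readFile(path, "utf-8");
      } catch {
        return null;
      }
    },
  };
}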
|
||||
|
||||
export interface GitDiffOptions {
|
||||
hideWhitespace?: boolean;
|
||||
}
|
||||
|
||||
export async function getCurrentBranch(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<string> {
|
||||
const result = await runtime.runGit(
|
||||
["rev-parse", "--abbrev-ref", "HEAD"],
|
||||
{ cwd },
|
||||
);
|
||||
return result.exitCode === 0 ? result.stdout.trim() || "HEAD" : "HEAD";
|
||||
}
|
||||
|
||||
export async function getDefaultBranch(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<string> {
|
||||
// Prefer the remote tracking ref (e.g. `origin/main`) so diffs run against
|
||||
// the upstream tip, not a potentially stale local copy. Only fall back to
|
||||
// a local ref when there's no remote configured at all.
|
||||
const remoteHead = await runtime.runGit(
|
||||
["symbolic-ref", "refs/remotes/origin/HEAD"],
|
||||
{ cwd },
|
||||
);
|
||||
if (remoteHead.exitCode === 0) {
|
||||
const ref = remoteHead.stdout.trim();
|
||||
if (ref) {
|
||||
// `symbolic-ref` only tells us what origin/HEAD *points at* — it does
|
||||
// not guarantee that the target ref was actually fetched. In narrow
|
||||
// or partial clones the pointer can be set while the target is
|
||||
// missing, in which case a later `git diff origin/main..HEAD` would
|
||||
// error. Verify the target exists before trusting it.
|
||||
const verify = await runtime.runGit(
|
||||
["show-ref", "--verify", "--quiet", ref],
|
||||
{ cwd },
|
||||
);
|
||||
if (verify.exitCode === 0) return ref.replace("refs/remotes/", "");
|
||||
}
|
||||
}
|
||||
|
||||
const mainBranch = await runtime.runGit(
|
||||
["show-ref", "--verify", "refs/heads/main"],
|
||||
{ cwd },
|
||||
);
|
||||
if (mainBranch.exitCode === 0) return "main";
|
||||
|
||||
return "master";
|
||||
}
|
||||
|
||||
/**
|
||||
* Query the remote for its default branch via `ls-remote --symref`. Returns
|
||||
* `origin/<name>` if the remote answers and the tracking ref exists locally,
|
||||
* otherwise `null`. Designed to run in the background at server startup — the
|
||||
* caller fires it with `.then()` and uses the result if/when it arrives.
|
||||
*
|
||||
* Timeout-guarded: if the network is slow or absent, the promise resolves
|
||||
* (with `null`) once the timeout fires. Never throws.
|
||||
*/
|
||||
export async function detectRemoteDefaultBranch(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<string | null> {
|
||||
try {
|
||||
const lsRemote = await runtime.runGit(
|
||||
["ls-remote", "--symref", "origin", "HEAD"],
|
||||
{ cwd, timeoutMs: 5000 },
|
||||
);
|
||||
if (lsRemote.exitCode !== 0) return null;
|
||||
const match = lsRemote.stdout.match(/^ref:\s+refs\/heads\/(\S+)\s+HEAD/m);
|
||||
if (!match) return null;
|
||||
const remoteBranch = `origin/${match[1]}`;
|
||||
const refExists = await runtime.runGit(
|
||||
["show-ref", "--verify", "--quiet", `refs/remotes/${remoteBranch}`],
|
||||
{ cwd },
|
||||
);
|
||||
return refExists.exitCode === 0 ? remoteBranch : null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export async function listBranches(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<AvailableBranches> {
|
||||
// Emit `<full-refname>\t<short-name>` so we can classify by ref prefix
|
||||
// without guessing from the short form — local branches can contain `/`
|
||||
// (e.g. `feature/foo`), so `name.includes("/")` would misclassify them.
|
||||
const result = await runtime.runGit(
|
||||
[
|
||||
"for-each-ref",
|
||||
"--format=%(refname)\t%(refname:short)",
|
||||
"refs/heads",
|
||||
"refs/remotes",
|
||||
],
|
||||
{ cwd },
|
||||
);
|
||||
if (result.exitCode !== 0) return { local: [], remote: [] };
|
||||
|
||||
const local: string[] = [];
|
||||
const remote: string[] = [];
|
||||
|
||||
for (const line of result.stdout.split("\n")) {
|
||||
const [fullRef, shortName] = line.split("\t");
|
||||
if (!fullRef || !shortName) continue;
|
||||
if (shortName.endsWith("/HEAD")) continue;
|
||||
if (fullRef.startsWith("refs/heads/")) {
|
||||
local.push(shortName);
|
||||
} else if (fullRef.startsWith("refs/remotes/")) {
|
||||
remote.push(shortName);
|
||||
}
|
||||
}
|
||||
|
||||
// Keep both local and remote refs — they can point to different commits
|
||||
// (stale local tracking branches are common) and users need to be able to
|
||||
// pick either explicitly. The picker groups them separately for clarity.
|
||||
local.sort();
|
||||
remote.sort();
|
||||
|
||||
return { local, remote };
|
||||
}
|
||||
|
||||
/**
|
||||
* Pick a safe base branch. Trusts the caller verbatim if they supplied one,
|
||||
* otherwise falls back to the detected default. Shared by Bun (`review.ts`)
|
||||
* and Pi (`serverReview.ts`) so both runtimes behave identically.
|
||||
*
|
||||
* Why trust the caller: the UI picker only ever sends refs from the known
|
||||
* list, and external/programmatic callers may pass tags, SHAs, or refs under
|
||||
* non-`origin` remotes that we must not silently rewrite (a tag `release` is
|
||||
* not the same commit as a branch `origin/release`). Invalid refs surface as
|
||||
* git errors on the next diff call, which is better than silently producing
|
||||
* a patch against the wrong commit.
|
||||
*/
|
||||
export function resolveBaseBranch(
|
||||
requested: string | undefined,
|
||||
detected: string,
|
||||
): string {
|
||||
return requested || detected;
|
||||
}
|
||||
|
||||
export async function getWorktrees(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<WorktreeInfo[]> {
|
||||
const result = await runtime.runGit(["worktree", "list", "--porcelain"], { cwd });
|
||||
if (result.exitCode !== 0) return [];
|
||||
|
||||
const entries: WorktreeInfo[] = [];
|
||||
let current: Partial<WorktreeInfo> = {};
|
||||
|
||||
for (const line of result.stdout.split("\n")) {
|
||||
if (line.startsWith("worktree ")) {
|
||||
if (current.path) {
|
||||
entries.push({
|
||||
path: current.path,
|
||||
head: current.head || "",
|
||||
branch: current.branch ?? null,
|
||||
});
|
||||
}
|
||||
current = { path: line.slice("worktree ".length) };
|
||||
} else if (line.startsWith("HEAD ")) {
|
||||
current.head = line.slice("HEAD ".length);
|
||||
} else if (line.startsWith("branch ")) {
|
||||
current.branch = line
|
||||
.slice("branch ".length)
|
||||
.replace("refs/heads/", "");
|
||||
} else if (line === "detached") {
|
||||
current.branch = null;
|
||||
}
|
||||
}
|
||||
|
||||
if (current.path) {
|
||||
entries.push({
|
||||
path: current.path,
|
||||
head: current.head || "",
|
||||
branch: current.branch ?? null,
|
||||
});
|
||||
}
|
||||
|
||||
return entries;
|
||||
}
|
||||
|
||||
export async function getGitContext(
|
||||
runtime: ReviewGitRuntime,
|
||||
cwd?: string,
|
||||
): Promise<GitContext> {
|
||||
const [currentBranch, defaultBranch, availableBranches] = await Promise.all([
|
||||
getCurrentBranch(runtime, cwd),
|
||||
getDefaultBranch(runtime, cwd),
|
||||
listBranches(runtime, cwd),
|
||||
]);
|
||||
|
||||
const diffOptions: DiffOption[] = [
|
||||
{ id: "uncommitted", label: "Uncommitted changes" },
|
||||
{ id: "staged", label: "Staged changes" },
|
||||
{ id: "unstaged", label: "Unstaged changes" },
|
||||
{ id: "last-commit", label: "Last commit" },
|
||||
];
|
||||
|
||||
// Always offer Branch diff / PR Diff when a default branch exists. The
|
||||
// older guard hid them when the reviewer was on the default branch (the
|
||||
// `vs <default>` diff from the default branch itself is always empty), but
|
||||
// the base picker now lets reviewers compare against any branch from any
|
||||
// branch, so there's no meaningless-by-construction option. Also: preserving
|
||||
// diff mode across worktree switches and Pi's `initialBase` can land the
|
||||
// reviewer on the default branch with branch/merge-base already active — the
|
||||
// old guard hid the active mode's option, trapping them. Unconditional
|
||||
// emission keeps the active option reachable in every flow.
|
||||
if (defaultBranch) {
|
||||
diffOptions.push({ id: "merge-base", label: "Committed changes" });
|
||||
}
|
||||
|
||||
diffOptions.push({ id: "all", label: "All files (HEAD)" });
|
||||
|
||||
const [worktrees, currentTreePathResult] = await Promise.all([
|
||||
getWorktrees(runtime, cwd),
|
||||
runtime.runGit(["rev-parse", "--show-toplevel"], { cwd }),
|
||||
]);
|
||||
|
||||
const currentTreePath =
|
||||
currentTreePathResult.exitCode === 0
|
||||
? currentTreePathResult.stdout.trim()
|
||||
: null;
|
||||
|
||||
return {
|
||||
currentBranch,
|
||||
defaultBranch,
|
||||
diffOptions,
|
||||
worktrees: worktrees.filter((wt) => wt.path !== currentTreePath),
|
||||
availableBranches,
|
||||
cwd,
|
||||
};
|
||||
}
|
||||
|
||||
async function getUntrackedFileDiffs(
|
||||
runtime: ReviewGitRuntime,
|
||||
srcPrefix = "a/",
|
||||
dstPrefix = "b/",
|
||||
cwd?: string,
|
||||
options?: GitDiffOptions,
|
||||
): Promise<string> {
|
||||
// git ls-files scopes to the CWD subtree and returns CWD-relative paths,
|
||||
// unlike git diff HEAD which always covers the full repo with root-relative
|
||||
// paths. Resolve the repo root so untracked files from the entire repo are
|
||||
// included and their paths match the tracked-diff output.
|
||||
const toplevelResult = await runtime.runGit(
|
||||
["rev-parse", "--show-toplevel"],
|
||||
{ cwd },
|
||||
);
|
||||
const rootCwd =
|
||||
toplevelResult.exitCode === 0 ? toplevelResult.stdout.trim() : cwd;
|
||||
|
||||
const lsResult = await runtime.runGit(
|
||||
["ls-files", "--others", "--exclude-standard"],
|
||||
{ cwd: rootCwd },
|
||||
);
|
||||
if (lsResult.exitCode !== 0) return "";
|
||||
|
||||
const files = lsResult.stdout
|
||||
.trim()
|
||||
.split("\n")
|
||||
.filter((file) => file.length > 0);
|
||||
|
||||
if (files.length === 0) return "";
|
||||
|
||||
const diffs = await Promise.all(
|
||||
files.map(async (file) => {
|
||||
const diffResult = await runtime.runGit(
|
||||
[
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...(options?.hideWhitespace ? ["-w"] : []),
|
||||
"--no-index",
|
||||
`--src-prefix=${srcPrefix}`,
|
||||
`--dst-prefix=${dstPrefix}`,
|
||||
"/dev/null",
|
||||
file,
|
||||
],
|
||||
{ cwd: rootCwd },
|
||||
);
|
||||
return diffResult.stdout;
|
||||
}),
|
||||
);
|
||||
|
||||
return diffs.join("");
|
||||
}
|
||||
|
||||
function assertGitSuccess(
|
||||
result: GitCommandResult,
|
||||
args: string[],
|
||||
): GitCommandResult {
|
||||
if (result.exitCode === 0) return result;
|
||||
|
||||
const command = `git ${args.join(" ")}`;
|
||||
const stderr = result.stderr.trim();
|
||||
throw new Error(
|
||||
stderr
|
||||
? `${command} failed: ${stderr}`
|
||||
: `${command} failed with exit code ${result.exitCode}`,
|
||||
);
|
||||
}
|
||||
|
||||
const WORKTREE_SUB_TYPES = new Set([
|
||||
"uncommitted",
|
||||
"staged",
|
||||
"unstaged",
|
||||
"last-commit",
|
||||
"branch",
|
||||
"merge-base",
|
||||
"all",
|
||||
]);
|
||||
|
||||
export function parseWorktreeDiffType(
|
||||
diffType: string,
|
||||
): { path: string; subType: string } | null {
|
||||
if (!diffType.startsWith("worktree:")) return null;
|
||||
|
||||
const rest = diffType.slice("worktree:".length);
|
||||
const lastColon = rest.lastIndexOf(":");
|
||||
if (lastColon !== -1) {
|
||||
const maybeSub = rest.slice(lastColon + 1);
|
||||
if (WORKTREE_SUB_TYPES.has(maybeSub)) {
|
||||
return { path: rest.slice(0, lastColon), subType: maybeSub };
|
||||
}
|
||||
}
|
||||
|
||||
return { path: rest, subType: "uncommitted" };
|
||||
}
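
// Illustrative sketch (not part of the generated module): worktree diff ids
// carry an optional sub-type after the last colon; without one, "uncommitted"
// is assumed. Paths are hypothetical examples.
function exampleParseWorktree(): void {
  console.log(parseWorktreeDiffType("worktree:/repos/feature-x:staged"));
  // → { path: "/repos/feature-x", subType: "staged" }
  console.log(parseWorktreeDiffType("worktree:/repos/feature-x"));
  // → { path: "/repos/feature-x", subType: "uncommitted" }
  console.log(parseWorktreeDiffType("staged"));
  // → null (not a worktree id)
}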
|
||||
|
||||
export async function runGitDiff(
|
||||
runtime: ReviewGitRuntime,
|
||||
diffType: DiffType,
|
||||
defaultBranch: string = "main",
|
||||
externalCwd?: string,
|
||||
options?: GitDiffOptions,
|
||||
): Promise<DiffResult> {
|
||||
let patch = "";
|
||||
let label = "";
|
||||
let cwd: string | undefined = externalCwd;
|
||||
let effectiveDiffType = diffType as string;
|
||||
|
||||
if (diffType.startsWith("worktree:")) {
|
||||
const parsed = parseWorktreeDiffType(diffType);
|
||||
if (!parsed) {
|
||||
return {
|
||||
patch: "",
|
||||
label: "Worktree error",
|
||||
error: "Could not parse worktree diff type",
|
||||
};
|
||||
}
|
||||
cwd = parsed.path;
|
||||
effectiveDiffType = parsed.subType;
|
||||
}
|
||||
|
||||
const wFlag = options?.hideWhitespace ? ["-w"] : [];
|
||||
|
||||
try {
|
||||
switch (effectiveDiffType) {
|
||||
case "uncommitted": {
|
||||
const trackedDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"HEAD",
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
];
|
||||
const hasHead =
|
||||
(await runtime.runGit(["rev-parse", "--verify", "HEAD"], { cwd }))
|
||||
.exitCode === 0;
|
||||
const trackedPatch = hasHead
|
||||
? assertGitSuccess(
|
||||
await runtime.runGit(trackedDiffArgs, { cwd }),
|
||||
trackedDiffArgs,
|
||||
).stdout
|
||||
: "";
|
||||
const untrackedDiff = await getUntrackedFileDiffs(
|
||||
runtime,
|
||||
"a/",
|
||||
"b/",
|
||||
cwd,
|
||||
options,
|
||||
);
|
||||
patch = trackedPatch + untrackedDiff;
|
||||
label = "Uncommitted changes";
|
||||
break;
|
||||
}
|
||||
|
||||
case "staged": {
|
||||
const stagedDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"--staged",
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
];
|
||||
const stagedDiff = assertGitSuccess(
|
||||
await runtime.runGit(stagedDiffArgs, { cwd }),
|
||||
stagedDiffArgs,
|
||||
);
|
||||
patch = stagedDiff.stdout;
|
||||
label = "Staged changes";
|
||||
break;
|
||||
}
|
||||
|
||||
case "unstaged": {
|
||||
const trackedDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
];
|
||||
const trackedDiff = assertGitSuccess(
|
||||
await runtime.runGit(trackedDiffArgs, { cwd }),
|
||||
trackedDiffArgs,
|
||||
);
|
||||
const untrackedDiff = await getUntrackedFileDiffs(
|
||||
runtime,
|
||||
"a/",
|
||||
"b/",
|
||||
cwd,
|
||||
options,
|
||||
);
|
||||
patch = trackedDiff.stdout + untrackedDiff;
|
||||
label = "Unstaged changes";
|
||||
break;
|
||||
}
|
||||
|
||||
case "last-commit": {
|
||||
const hasParent = await runtime.runGit(
|
||||
["rev-parse", "--verify", "HEAD~1"],
|
||||
{ cwd },
|
||||
);
|
||||
const args =
|
||||
hasParent.exitCode === 0
|
||||
? ["diff", "--no-ext-diff", ...wFlag, "HEAD~1..HEAD", "--src-prefix=a/", "--dst-prefix=b/"]
|
||||
: ["diff", "--no-ext-diff", ...wFlag, "--root", "HEAD", "--src-prefix=a/", "--dst-prefix=b/"];
|
||||
const lastCommitDiff = assertGitSuccess(
|
||||
await runtime.runGit(args, { cwd }),
|
||||
args,
|
||||
);
|
||||
patch = lastCommitDiff.stdout;
|
||||
label = "Last commit";
|
||||
break;
|
||||
}
|
||||
|
||||
case "branch": {
|
||||
// `--end-of-options` hardens against a caller-supplied `defaultBranch`
|
||||
// that starts with `-` being parsed as a git flag (e.g. `--output=...`
|
||||
// would redirect diff output to an attacker-chosen path). Same pattern
|
||||
// applied wherever user-controlled refs flow into a git argv.
|
||||
const branchDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
"--end-of-options",
|
||||
`${defaultBranch}..HEAD`,
|
||||
];
|
||||
const branchDiff = assertGitSuccess(
|
||||
await runtime.runGit(branchDiffArgs, { cwd }),
|
||||
branchDiffArgs,
|
||||
);
|
||||
patch = branchDiff.stdout;
|
||||
label = `Changes vs ${defaultBranch}`;
|
||||
break;
|
||||
}
|
||||
|
||||
case "merge-base": {
|
||||
const mergeBaseLookupArgs = ["merge-base", "--end-of-options", defaultBranch, "HEAD"];
|
||||
const mergeBaseResult = assertGitSuccess(
|
||||
await runtime.runGit(mergeBaseLookupArgs, { cwd }),
|
||||
mergeBaseLookupArgs,
|
||||
);
|
||||
const mergeBase = mergeBaseResult.stdout.trim();
|
||||
const mergeBaseDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
"--end-of-options",
|
||||
`${mergeBase}..HEAD`,
|
||||
];
|
||||
const mergeBaseDiff = assertGitSuccess(
|
||||
await runtime.runGit(mergeBaseDiffArgs, { cwd }),
|
||||
mergeBaseDiffArgs,
|
||||
);
|
||||
patch = mergeBaseDiff.stdout;
|
||||
label = `PR diff vs ${defaultBranch}`;
|
||||
break;
|
||||
}
|
||||
|
||||
case "all": {
|
||||
// Diff from the empty tree to HEAD — shows every tracked file as an addition.
|
||||
const emptyTreeResult = await runtime.runGit(["hash-object", "-t", "tree", "/dev/null"], { cwd });
|
||||
const emptyTree = emptyTreeResult.exitCode === 0
|
||||
? emptyTreeResult.stdout.trim()
|
||||
: "4b825dc642cb6eb9a060e54bf8d69288fbee4904";
|
||||
const allDiffArgs = [
|
||||
"diff",
|
||||
"--no-ext-diff",
|
||||
...wFlag,
|
||||
"--src-prefix=a/",
|
||||
"--dst-prefix=b/",
|
||||
"--end-of-options",
|
||||
`${emptyTree}..HEAD`,
|
||||
];
|
||||
const allDiff = assertGitSuccess(
|
||||
await runtime.runGit(allDiffArgs, { cwd }),
|
||||
allDiffArgs,
|
||||
);
|
||||
patch = allDiff.stdout;
|
||||
label = "All files";
|
||||
break;
|
||||
}
|
||||
|
||||
default:
|
||||
return { patch: "", label: "Unknown diff type" };
|
||||
}
|
||||
} catch (error) {
|
||||
const raw = error instanceof Error ? error.message : String(error);
|
||||
// Git dumps its entire --help output on some failures; keep only the
|
||||
// first meaningful line so the UI doesn't vomit a wall of text.
|
||||
const firstLine = raw.split("\n").find((l) => l.trim().length > 0) ?? raw;
|
||||
const message = firstLine.length > 200 ? firstLine.slice(0, 200) + "…" : firstLine;
|
||||
return {
|
||||
patch: "",
|
||||
label: cwd ? "Worktree error" : `Error: ${diffType}`,
|
||||
error: message,
|
||||
};
|
||||
}
|
||||
|
||||
if (cwd) {
|
||||
const branch = await getCurrentBranch(runtime, cwd);
|
||||
label =
|
||||
branch && branch !== "HEAD"
|
||||
? `${branch}: ${label}`
|
||||
: `${cwd.split("/").pop()}: ${label}`;
|
||||
}
|
||||
|
||||
return { patch, label };
|
||||
}
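
// Illustrative end-to-end sketch (not part of the generated module):
// `exampleCreateNodeRuntime` above is a hypothetical helper; any
// ReviewGitRuntime implementation works here.
async function exampleReviewDiff(): Promise<void> {
  const runtime = await exampleCreateNodeRuntime();
  const defaultBranch = await getDefaultBranch(runtime);
  const { patch, label, error } = await runGitDiff(
    runtime,
    "merge-base",
    defaultBranch,
    undefined,
    { hideWhitespace: true },
  );
  console.log(error ? `diff failed: ${error}` : `${label}: ${patch.length} bytes`);
}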
|
||||
|
||||
export async function runGitDiffWithContext(
|
||||
runtime: ReviewGitRuntime,
|
||||
diffType: DiffType,
|
||||
gitContext: GitContext,
|
||||
options?: GitDiffOptions,
|
||||
): Promise<DiffResult> {
|
||||
return runGitDiff(runtime, diffType, gitContext.defaultBranch, gitContext.cwd, options);
|
||||
}
|
||||
|
||||
export async function getFileContentsForDiff(
|
||||
runtime: ReviewGitRuntime,
|
||||
diffType: DiffType,
|
||||
defaultBranch: string,
|
||||
filePath: string,
|
||||
oldPath?: string,
|
||||
cwd?: string,
|
||||
): Promise<{ oldContent: string | null; newContent: string | null }> {
|
||||
const oldFilePath = oldPath || filePath;
|
||||
|
||||
let effectiveDiffType = diffType as string;
|
||||
if (diffType.startsWith("worktree:")) {
|
||||
const parsed = parseWorktreeDiffType(diffType);
|
||||
if (!parsed) return { oldContent: null, newContent: null };
|
||||
cwd = parsed.path;
|
||||
effectiveDiffType = parsed.subType;
|
||||
}
|
||||
|
||||
async function gitShow(ref: string, path: string): Promise<string | null> {
|
||||
// `--end-of-options` hardens against user-supplied refs starting with `-`.
|
||||
const result = await runtime.runGit(["show", "--end-of-options", `${ref}:${path}`], { cwd });
|
||||
return result.exitCode === 0 ? result.stdout : null;
|
||||
}
|
||||
|
||||
async function readWorkingTree(path: string): Promise<string | null> {
|
||||
const fullPath = cwd ? resolvePath(cwd, path) : path;
|
||||
return runtime.readTextFile(fullPath);
|
||||
}
|
||||
|
||||
switch (effectiveDiffType) {
|
||||
case "uncommitted":
|
||||
return {
|
||||
oldContent: await gitShow("HEAD", oldFilePath),
|
||||
newContent: await readWorkingTree(filePath),
|
||||
};
|
||||
case "staged":
|
||||
return {
|
||||
oldContent: await gitShow("HEAD", oldFilePath),
|
||||
newContent: await gitShow(":0", filePath),
|
||||
};
|
||||
case "unstaged":
|
||||
return {
|
||||
oldContent: await gitShow(":0", oldFilePath),
|
||||
newContent: await readWorkingTree(filePath),
|
||||
};
|
||||
case "last-commit":
|
||||
return {
|
||||
oldContent: await gitShow("HEAD~1", oldFilePath),
|
||||
newContent: await gitShow("HEAD", filePath),
|
||||
};
|
||||
case "branch":
|
||||
return {
|
||||
oldContent: await gitShow(defaultBranch, oldFilePath),
|
||||
newContent: await gitShow("HEAD", filePath),
|
||||
};
|
||||
case "merge-base": {
|
||||
const mbResult = await runtime.runGit(["merge-base", "--end-of-options", defaultBranch, "HEAD"], { cwd });
|
||||
const mb = mbResult.exitCode === 0 ? mbResult.stdout.trim() : defaultBranch;
|
||||
return {
|
||||
oldContent: await gitShow(mb, oldFilePath),
|
||||
newContent: await gitShow("HEAD", filePath),
|
||||
};
|
||||
}
|
||||
case "all":
|
||||
return {
|
||||
oldContent: null,
|
||||
newContent: await gitShow("HEAD", filePath),
|
||||
};
|
||||
default:
|
||||
return { oldContent: null, newContent: null };
|
||||
}
|
||||
}
|
||||
|
||||
export function validateFilePath(filePath: string): void {
|
||||
if (filePath.includes("..") || filePath.startsWith("/")) {
|
||||
throw new Error("Invalid file path");
|
||||
}
|
||||
}
|
||||
|
||||
async function ensureGitSuccess(
|
||||
runtime: ReviewGitRuntime,
|
||||
args: string[],
|
||||
cwd?: string,
|
||||
): Promise<void> {
|
||||
const result = await runtime.runGit(args, { cwd });
|
||||
if (result.exitCode !== 0) {
|
||||
throw new Error(result.stderr.trim() || `git ${args.join(" ")} failed`);
|
||||
}
|
||||
}
|
||||
|
||||
export async function gitAddFile(
|
||||
runtime: ReviewGitRuntime,
|
||||
filePath: string,
|
||||
cwd?: string,
|
||||
): Promise<void> {
|
||||
validateFilePath(filePath);
|
||||
await ensureGitSuccess(runtime, ["add", "--", filePath], cwd);
|
||||
}
|
||||
|
||||
export async function gitResetFile(
|
||||
runtime: ReviewGitRuntime,
|
||||
filePath: string,
|
||||
cwd?: string,
|
||||
): Promise<void> {
|
||||
validateFilePath(filePath);
|
||||
await ensureGitSuccess(runtime, ["reset", "HEAD", "--", filePath], cwd);
|
||||
}
|
||||
|
||||
export function parseP4DiffType(
|
||||
diffType: string,
|
||||
): { changelist: string | "default" } | null {
|
||||
if (diffType === "p4-default") return { changelist: "default" };
|
||||
if (diffType.startsWith("p4-changelist:")) {
|
||||
return { changelist: diffType.slice("p4-changelist:".length) };
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
export function isP4DiffType(diffType: string): boolean {
|
||||
return parseP4DiffType(diffType) !== null;
|
||||
}
|
||||
377
extensions/plannotator/generated/storage.ts
Normal file
@@ -0,0 +1,377 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/storage.ts
|
||||
/**
|
||||
* Plan Storage Utility
|
||||
*
|
||||
* Saves plans and annotations to ~/.plannotator/plans/
|
||||
* Cross-platform: works on Windows, macOS, and Linux.
|
||||
*
|
||||
* Runtime-agnostic: uses only node:fs, node:path, node:os.
|
||||
*/
|
||||
|
||||
import { homedir } from "os";
|
||||
import { join, resolve, sep } from "path";
|
||||
import { mkdirSync, writeFileSync, readFileSync, readdirSync, statSync, existsSync } from "fs";
|
||||
import { sanitizeTag } from "./project";
|
||||
import { resolveUserPath } from "./resolve-file";
|
||||
|
||||
/**
|
||||
* Get the plan storage directory, creating it if needed.
|
||||
* Cross-platform: uses os.homedir() for Windows/macOS/Linux compatibility.
|
||||
* @param customPath Optional custom path. Supports ~ for home directory.
|
||||
*/
|
||||
export function getPlanDir(customPath?: string | null): string {
|
||||
let planDir: string;
|
||||
|
||||
if (customPath?.trim()) {
|
||||
planDir = resolveUserPath(customPath);
|
||||
} else {
|
||||
planDir = join(homedir(), ".plannotator", "plans");
|
||||
}
|
||||
|
||||
mkdirSync(planDir, { recursive: true });
|
||||
return planDir;
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract the first heading from markdown content.
|
||||
*/
|
||||
function extractFirstHeading(markdown: string): string | null {
|
||||
const match = markdown.match(/^#\s+(.+)$/m);
|
||||
if (!match) return null;
|
||||
return match[1].trim();
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a slug from plan content.
|
||||
* Format: {sanitized-heading}-YYYY-MM-DD
|
||||
*/
|
||||
export function generateSlug(plan: string): string {
|
||||
const date = new Date().toISOString().split("T")[0]; // YYYY-MM-DD
|
||||
|
||||
const heading = extractFirstHeading(plan);
|
||||
const slug = heading ? sanitizeTag(heading) : null;
|
||||
|
||||
return slug ? `${slug}-${date}` : `plan-${date}`;
|
||||
}
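
// Illustrative sketch (not part of the generated module): a plan whose first
// heading is "# Add Rate Limiting", saved on 2025-01-15, yields a slug like
// "add-rate-limiting-2025-01-15" (exact sanitization comes from sanitizeTag);
// content with no heading falls back to "plan-2025-01-15".
function exampleSlugs(): void {
  console.log(generateSlug("# Add Rate Limiting\n\n- [ ] step one"));
  console.log(generateSlug("just notes, no heading"));
}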
|
||||
|
||||
/**
|
||||
* Save the plan markdown to disk.
|
||||
* Returns the full path to the saved file.
|
||||
*/
|
||||
export function savePlan(slug: string, content: string, customPath?: string | null): string {
|
||||
const planDir = getPlanDir(customPath);
|
||||
const filePath = join(planDir, `${slug}.md`);
|
||||
writeFileSync(filePath, content, "utf-8");
|
||||
return filePath;
|
||||
}
|
||||
|
||||
/**
|
||||
* Save annotations to disk.
|
||||
* Returns the full path to the saved file.
|
||||
*/
|
||||
export function saveAnnotations(slug: string, annotationsContent: string, customPath?: string | null): string {
|
||||
const planDir = getPlanDir(customPath);
|
||||
const filePath = join(planDir, `${slug}.annotations.md`);
|
||||
writeFileSync(filePath, annotationsContent, "utf-8");
|
||||
return filePath;
|
||||
}
|
||||
|
||||
/**
|
||||
* Save the final snapshot on approve/deny.
|
||||
* Combines plan and annotations into a single file with status suffix.
|
||||
* Returns the full path to the saved file.
|
||||
*/
|
||||
export function saveFinalSnapshot(
|
||||
slug: string,
|
||||
status: "approved" | "denied",
|
||||
plan: string,
|
||||
annotations: string,
|
||||
customPath?: string | null
|
||||
): string {
|
||||
const planDir = getPlanDir(customPath);
|
||||
const filePath = join(planDir, `${slug}-${status}.md`);
|
||||
|
||||
// Combine plan with annotations appended
|
||||
let content = plan;
|
||||
if (annotations && annotations !== "No changes detected.") {
|
||||
content += "\n\n---\n\n" + annotations;
|
||||
}
|
||||
|
||||
writeFileSync(filePath, content, "utf-8");
|
||||
return filePath;
|
||||
}
|
||||
|
||||
// --- Plan Archive ---
|
||||
|
||||
export interface ArchivedPlan {
|
||||
filename: string;
|
||||
title: string;
|
||||
date: string;
|
||||
timestamp: string; // ISO string from file mtime
|
||||
status: "approved" | "denied" | "unknown";
|
||||
size: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse an archive filename into metadata.
|
||||
* Handles both old (DATE-heading-status.md) and new (heading-DATE-status.md) formats.
|
||||
*/
|
||||
export function parseArchiveFilename(filename: string): ArchivedPlan | null {
|
||||
// Skip non-decision files
|
||||
if (filename.endsWith(".annotations.md") || filename.endsWith(".diff.md")) return null;
|
||||
|
||||
const base = filename.replace(/\.md$/, "");
|
||||
|
||||
// Extract status suffix
|
||||
let status: ArchivedPlan["status"] = "unknown";
|
||||
let slug = base;
|
||||
if (base.endsWith("-approved")) {
|
||||
status = "approved";
|
||||
slug = base.slice(0, -"-approved".length);
|
||||
} else if (base.endsWith("-denied")) {
|
||||
status = "denied";
|
||||
slug = base.slice(0, -"-denied".length);
|
||||
} else {
|
||||
// Skip plain files (no decision status)
|
||||
return null;
|
||||
}
|
||||
|
||||
// Extract date (YYYY-MM-DD) — could be anywhere in the slug
|
||||
const dateMatch = slug.match(/(\d{4}-\d{2}-\d{2})/);
|
||||
const date = dateMatch ? dateMatch[1] : "";
|
||||
|
||||
// Title: remove date, convert hyphens to spaces, trim
|
||||
const title = slug
|
||||
.replace(/\d{4}-\d{2}-\d{2}/, "")
|
||||
.replace(/^-+|-+$/g, "")
|
||||
.replace(/-+/g, " ")
|
||||
.trim() || "Untitled Plan";
|
||||
|
||||
return { filename, title, date, timestamp: "", status, size: 0 };
|
||||
}
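
// Illustrative sketch (not part of the generated module): only decision
// snapshots parse; working files and plain plans return null. Filenames are
// hypothetical examples.
function exampleParseArchive(): void {
  console.log(parseArchiveFilename("fix-auth-2025-01-15-approved.md"));
  // → { filename, title: "fix auth", date: "2025-01-15", status: "approved", ... }
  console.log(parseArchiveFilename("fix-auth-2025-01-15.annotations.md")); // → null
  console.log(parseArchiveFilename("fix-auth-2025-01-15.md")); // → null (no status suffix)
}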
|
||||
|
||||
/**
|
||||
* List all archived plans (approved/denied decision snapshots).
|
||||
* Returns plans sorted by date descending.
|
||||
*/
|
||||
export function listArchivedPlans(customPath?: string | null): ArchivedPlan[] {
|
||||
const planDir = getPlanDir(customPath);
|
||||
try {
|
||||
const entries = readdirSync(planDir);
|
||||
const plans: ArchivedPlan[] = [];
|
||||
for (const entry of entries) {
|
||||
if (!entry.endsWith(".md")) continue;
|
||||
const parsed = parseArchiveFilename(entry);
|
||||
if (!parsed) continue;
|
||||
try {
|
||||
const stat = statSync(join(planDir, entry));
|
||||
parsed.size = stat.size;
|
||||
parsed.timestamp = stat.mtime.toISOString();
|
||||
} catch { /* keep defaults */ }
|
||||
plans.push(parsed);
|
||||
}
|
||||
return plans.sort((a, b) => b.date.localeCompare(a.date) || b.timestamp.localeCompare(a.timestamp));
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Read an archived plan file by filename.
|
||||
* Returns null if the file doesn't exist or on read error.
|
||||
*/
|
||||
export function readArchivedPlan(filename: string, customPath?: string | null): string | null {
|
||||
const planDir = getPlanDir(customPath);
|
||||
const filePath = resolve(planDir, filename);
|
||||
// Guard against path traversal (resolve + trailing separator, matching reference-handlers.ts)
|
||||
if (!filePath.startsWith(planDir + sep)) return null;
|
||||
try {
|
||||
return readFileSync(filePath, "utf-8");
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// --- Version History ---
|
||||
|
||||
/**
|
||||
* Get the history directory for a project/slug combination, creating it if needed.
|
||||
* History is always stored in ~/.plannotator/history/{project}/{slug}/.
|
||||
* Not affected by the customPath setting (that only affects decision saves).
|
||||
*/
|
||||
export function getHistoryDir(project: string, slug: string): string {
|
||||
const historyDir = join(homedir(), ".plannotator", "history", project, slug);
|
||||
mkdirSync(historyDir, { recursive: true });
|
||||
return historyDir;
|
||||
}
|
||||
|
||||
/**
|
||||
* Determine the next version number by scanning existing files.
|
||||
* Returns 1 if no versions exist, otherwise max + 1.
|
||||
*/
|
||||
function getNextVersionNumber(historyDir: string): number {
|
||||
try {
|
||||
const entries = readdirSync(historyDir);
|
||||
let max = 0;
|
||||
for (const entry of entries) {
|
||||
const match = entry.match(/^(\d+)\.md$/);
|
||||
if (match) {
|
||||
const num = parseInt(match[1], 10);
|
||||
if (num > max) max = num;
|
||||
}
|
||||
}
|
||||
return max + 1;
|
||||
} catch {
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save a plan version to the history directory.
|
||||
* Deduplication: if the latest version has identical content, skip saving.
|
||||
* Returns the version number, file path, and whether a new file was created.
|
||||
*/
|
||||
export function saveToHistory(
|
||||
project: string,
|
||||
slug: string,
|
||||
plan: string
|
||||
): { version: number; path: string; isNew: boolean } {
|
||||
const historyDir = getHistoryDir(project, slug);
|
||||
const nextVersion = getNextVersionNumber(historyDir);
|
||||
|
||||
// Deduplicate: check if latest version has identical content
|
||||
if (nextVersion > 1) {
|
||||
const latestPath = join(historyDir, `${String(nextVersion - 1).padStart(3, "0")}.md`);
|
||||
try {
|
||||
const existing = readFileSync(latestPath, "utf-8");
|
||||
if (existing === plan) {
|
||||
return { version: nextVersion - 1, path: latestPath, isNew: false };
|
||||
}
|
||||
} catch {
|
||||
// File read failed, proceed with saving
|
||||
}
|
||||
}
|
||||
|
||||
const fileName = `${String(nextVersion).padStart(3, "0")}.md`;
|
||||
const filePath = join(historyDir, fileName);
|
||||
writeFileSync(filePath, plan, "utf-8");
|
||||
return { version: nextVersion, path: filePath, isNew: true };
|
||||
}
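
// Illustrative sketch (not part of the generated module): resubmitting an
// unchanged plan does not create a new version file. Project and slug names
// are hypothetical examples.
function exampleHistory(): void {
  const first = saveToHistory("my-project", "add-auth-2025-01-15", "# Plan v1");
  const dup = saveToHistory("my-project", "add-auth-2025-01-15", "# Plan v1");
  console.log(first.version, first.isNew); // e.g. 1 true
  console.log(dup.version, dup.isNew);     // 1 false (deduplicated)
}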
|
||||
|
||||
/**
|
||||
* Read a specific version's content from history.
|
||||
* Returns null if the version doesn't exist or on read error.
|
||||
*/
|
||||
export function getPlanVersion(
|
||||
project: string,
|
||||
slug: string,
|
||||
version: number
|
||||
): string | null {
|
||||
const historyDir = join(homedir(), ".plannotator", "history", project, slug);
|
||||
const fileName = `${String(version).padStart(3, "0")}.md`;
|
||||
const filePath = join(historyDir, fileName);
|
||||
|
||||
try {
|
||||
return readFileSync(filePath, "utf-8");
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the file path for a specific version in history.
|
||||
* Returns null if the version file doesn't exist.
|
||||
*/
|
||||
export function getPlanVersionPath(
|
||||
project: string,
|
||||
slug: string,
|
||||
version: number
|
||||
): string | null {
|
||||
const historyDir = join(homedir(), ".plannotator", "history", project, slug);
|
||||
const fileName = `${String(version).padStart(3, "0")}.md`;
|
||||
const filePath = join(historyDir, fileName);
|
||||
return existsSync(filePath) ? filePath : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the number of versions stored for a project/slug.
|
||||
* Returns 0 if the directory doesn't exist.
|
||||
*/
|
||||
export function getVersionCount(project: string, slug: string): number {
|
||||
const historyDir = join(homedir(), ".plannotator", "history", project, slug);
|
||||
try {
|
||||
const entries = readdirSync(historyDir);
|
||||
return entries.filter((e) => /^\d+\.md$/.test(e)).length;
|
||||
} catch {
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List all versions for a project/slug with metadata.
|
||||
* Returns versions sorted ascending by version number.
|
||||
*/
|
||||
export function listVersions(
|
||||
project: string,
|
||||
slug: string
|
||||
): Array<{ version: number; timestamp: string }> {
|
||||
const historyDir = join(homedir(), ".plannotator", "history", project, slug);
|
||||
try {
|
||||
const entries = readdirSync(historyDir);
|
||||
const versions: Array<{ version: number; timestamp: string }> = [];
|
||||
for (const entry of entries) {
|
||||
const match = entry.match(/^(\d+)\.md$/);
|
||||
if (match) {
|
||||
const version = parseInt(match[1], 10);
|
||||
const filePath = join(historyDir, entry);
|
||||
try {
|
||||
const stat = statSync(filePath);
|
||||
versions.push({ version, timestamp: stat.mtime.toISOString() });
|
||||
} catch {
|
||||
versions.push({ version, timestamp: "" });
|
||||
}
|
||||
}
|
||||
}
|
||||
return versions.sort((a, b) => a.version - b.version);
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List all plan slugs stored for a project.
|
||||
* Returns slugs sorted by most recently modified first.
|
||||
*/
|
||||
export function listProjectPlans(
|
||||
project: string
|
||||
): Array<{ slug: string; versions: number; lastModified: string }> {
|
||||
const projectDir = join(homedir(), ".plannotator", "history", project);
|
||||
try {
|
||||
const entries = readdirSync(projectDir, { withFileTypes: true });
|
||||
const plans: Array<{ slug: string; versions: number; lastModified: string }> = [];
|
||||
for (const entry of entries) {
|
||||
if (!entry.isDirectory()) continue;
|
||||
const slugDir = join(projectDir, entry.name);
|
||||
const files = readdirSync(slugDir).filter((f) => /^\d+\.md$/.test(f));
|
||||
if (files.length === 0) continue;
|
||||
|
||||
// Find most recent file modification time
|
||||
let latest = 0;
|
||||
for (const file of files) {
|
||||
try {
|
||||
const mtime = statSync(join(slugDir, file)).mtime.getTime();
|
||||
if (mtime > latest) latest = mtime;
|
||||
} catch { /* skip */ }
|
||||
}
|
||||
|
||||
plans.push({
|
||||
slug: entry.name,
|
||||
versions: files.length,
|
||||
lastModified: latest ? new Date(latest).toISOString() : "",
|
||||
});
|
||||
}
|
||||
return plans.sort((a, b) => b.lastModified.localeCompare(a.lastModified));
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
601
extensions/plannotator/generated/tour-review.ts
Normal file
@@ -0,0 +1,601 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/server/tour/tour-review.ts
|
||||
import { join } from "node:path";
|
||||
import { homedir, tmpdir } from "node:os";
|
||||
import { mkdir, writeFile, readFile, unlink } from "node:fs/promises";
|
||||
import type { DiffType } from "./review-core.js";
|
||||
import type { PRMetadata } from "./pr-provider.js";
|
||||
import type {
|
||||
CodeTourOutput,
|
||||
TourDiffAnchor,
|
||||
TourKeyTakeaway,
|
||||
TourStop,
|
||||
TourQAItem,
|
||||
} from "./tour.js";
|
||||
|
||||
export type { CodeTourOutput, TourDiffAnchor, TourKeyTakeaway, TourStop, TourQAItem };
|
||||
|
||||
export const TOUR_EMPTY_OUTPUT_ERROR = "Tour generation returned empty or malformed output";
|
||||
|
||||
export const TOUR_SCHEMA_JSON = JSON.stringify({
|
||||
type: "object",
|
||||
properties: {
|
||||
title: { type: "string" },
|
||||
greeting: { type: "string" },
|
||||
intent: { type: "string" },
|
||||
before: { type: "string" },
|
||||
after: { type: "string" },
|
||||
key_takeaways: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
text: { type: "string" },
|
||||
severity: { type: "string", enum: ["info", "important", "warning"] },
|
||||
},
|
||||
required: ["text", "severity"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
stops: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
title: { type: "string" },
|
||||
gist: { type: "string" },
|
||||
detail: { type: "string" },
|
||||
transition: { type: "string" },
|
||||
anchors: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
file: { type: "string" },
|
||||
line: { type: "integer" },
|
||||
end_line: { type: "integer" },
|
||||
hunk: { type: "string" },
|
||||
label: { type: "string" },
|
||||
},
|
||||
required: ["file", "line", "end_line", "hunk", "label"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
},
|
||||
required: ["title", "gist", "detail", "transition", "anchors"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
qa_checklist: {
|
||||
type: "array",
|
||||
items: {
|
||||
type: "object",
|
||||
properties: {
|
||||
question: { type: "string" },
|
||||
stop_indices: { type: "array", items: { type: "integer" } },
|
||||
},
|
||||
required: ["question", "stop_indices"],
|
||||
additionalProperties: false,
|
||||
},
|
||||
},
|
||||
},
|
||||
required: ["title", "greeting", "intent", "before", "after", "key_takeaways", "stops", "qa_checklist"],
|
||||
additionalProperties: false,
|
||||
});
|
||||
|
||||
export const TOUR_REVIEW_PROMPT = `# Code Tour Narrator
|
||||
|
||||
## Identity
|
||||
You are a colleague giving a casual, warm tour of work you understand well.
|
||||
Think of it like sitting down next to someone and saying: "Hey Mike, here's
|
||||
the PR. Let me walk you through it." The whole voice is conversational, not
|
||||
documentary. You're telling the story of what changed and why.
|
||||
|
||||
The arguments (like "here's why we did it this way" or "we picked X instead
|
||||
of Y") live INSIDE the stop details, where they belong. The framing (the
|
||||
greeting, intent, before/after, transitions between stops) stays warm and
|
||||
human, the way a coworker actually talks over coffee.
|
||||
|
||||
You are NOT finding bugs. You are NOT writing a technical report.
|
||||
|
||||
## Tone
|
||||
- Conversational throughout. You're talking to a coworker, not writing docs.
|
||||
- Use "we" and "you". "Here's what we changed." "You'll notice that..."
|
||||
- A couple of sentences of context is fine, even for small PRs. If a
|
||||
colleague was describing a one-line change, they wouldn't just say "I
|
||||
changed a line." They'd say "Oh yeah, I bumped the TTL from 7 days to 24
|
||||
hours because the audit flagged it last month." A little color is good.
|
||||
- Each stop should feel like a colleague pausing to point at something:
|
||||
"Okay, look at this part. Here's why it's interesting."
|
||||
- **Do NOT use em-dashes (—) anywhere.** They're a dead giveaway of
|
||||
AI-generated prose. Use commas, colons, semicolons, or separate sentences
|
||||
instead. If you want to add an aside, use parentheses or start a new
|
||||
sentence. Never an em-dash.
|
||||
- No emoji anywhere. The UI handles all visual labeling deterministically.
|
||||
|
||||
## Output structure
|
||||
|
||||
### greeting
|
||||
2-4 sentences welcoming the reviewer and setting the scene. Not a headline,
|
||||
more like how you'd actually open a conversation. "Hey, so this PR does X
|
||||
and Y. Grab a coffee; I'll walk you through it." A bit of warmth and context,
|
||||
even for small changes.
|
||||
Example: "Hey, so this PR tightens the auth session lifetime from a week down
|
||||
to 24 hours. It's small in line count but it's the fix the security team has
|
||||
been asking for since Q1. Let me walk you through it."
|
||||
|
||||
### intent
|
||||
1-3 sentences explaining WHY this changeset exists. What problem is being
|
||||
solved? What motivated the work? Keep it conversational; you're giving
|
||||
context, not writing a ticket.
|
||||
|
||||
To determine intent:
|
||||
- If a PR/MR URL was provided, read the PR description (gh pr view or
|
||||
equivalent). Look for motivation, linked issues, and context the author
|
||||
provided.
|
||||
- If the PR body references a GitHub issue (e.g. "Fixes #123", "Closes
|
||||
owner/repo#456") or GitLab issue, read that specific issue for deeper
|
||||
context.
|
||||
- If no PR is provided, infer intent from commit messages, branch name, and
|
||||
the nature of the changes themselves.
|
||||
- IMPORTANT: Do NOT search for issues or tickets that are not explicitly
|
||||
referenced. Do not browse all open issues. Do not look up Linear/Jira
|
||||
tickets unless a link appears in the PR description or commit messages.
|
||||
Only follow what is given.
|
||||
|
||||
Example: "Closes SEC-412, the overly-permissive session TTL flagged by the
|
||||
security team during the Q1 audit. It also lays some groundwork for the
|
||||
offline-first work shipping next sprint."
|
||||
|
||||
### before / after
|
||||
One to two sentences each. Paint the picture of the world before and after
|
||||
this change. Focus on user or system behavior, not code structure.
|
||||
Example before: "Sessions lasted 7 days, with no refresh contract, so a
|
||||
stolen token was dangerous for a full week."
|
||||
Example after: "Sessions now expire in 24 hours with a clean refresh path,
|
||||
and mobile clients poll every 15 minutes to stay fresh."
|
||||
|
||||
### key_takeaways
|
||||
3 to 5 bullet points. These are the MOST IMPORTANT things someone needs to
|
||||
know at a glance about what this changeset DOES. Focus on what changes in
|
||||
behavior, functionality, or developer experience. Each is ONE sentence. No
|
||||
emoji, no prefix, just the text.
|
||||
|
||||
Severity guide (drives visual styling automatically; pick honestly, don't inflate):
|
||||
- "info": neutral context, good to know.
|
||||
- "important": a meaningful change in behavior, capability, or system contract.
|
||||
- "warning": a behavioral shift worth watching, something that changes how
|
||||
the system works in a way someone could miss. NOT code smells or style
|
||||
nits. A clean changeset with no warnings is perfectly normal.
|
||||
|
||||
### stops
|
||||
Each stop is the colleague pausing at a specific change to explain it.
|
||||
|
||||
#### How to ORDER stops
|
||||
Order by READING FLOW, the order the colleague would walk you through the
|
||||
change to make it understandable. NOT by blast radius or criticality.
|
||||
|
||||
Lead with the entry point: the file or function that, if understood alone,
|
||||
unlocks the rest. Then walk outward:
|
||||
- Definitions before consumers (types/interfaces/schemas before usage).
|
||||
- Cause before effect (the change that motivated downstream changes comes first).
|
||||
- Verification last (tests and migrations after the code they exercise).
|
||||
|
||||
#### How to CHUNK stops
|
||||
A stop is a logical change, NOT a file. If three files changed for one reason,
|
||||
that's ONE stop with three anchors. If one file has two unrelated changes,
|
||||
that's two stops. Never "one-stop-per-file" by default; let logic decide.
|
||||
|
||||
#### Stop fields
|
||||
- **title**: Short, friendly. "Token refresh flow", not "Changes to auth/refresh.ts".
|
||||
- **gist**: ONE sentence. The headline. A reviewer who reads nothing else should
|
||||
understand this stop from the gist alone.
|
||||
- **detail**: This is where the colleague pauses to explain. Supports basic markdown.
|
||||
- Start with 1-2 sentences describing the situation or problem this stop addresses.
|
||||
- Then make the argument: WHY did we change this? WHY does the new code look the
|
||||
way it does? If a non-obvious choice was made (data structure, error strategy,
|
||||
sync vs async, where the logic lives), surface it. "We did X instead of Y
|
||||
because Z" is exactly what the reviewer wants.
|
||||
- Use ### headings (e.g. "### Why this shape") to highlight critical sub-sections.
|
||||
- Use > [!IMPORTANT], > [!WARNING], or > [!NOTE] callout blocks for context
|
||||
that helps the reader understand non-obvious decisions or behavioral shifts
|
||||
(e.g., a new default value, a changed error path, a contract that callers
|
||||
now depend on). These are not for flagging code smells.
|
||||
- Use - bullet points for multi-part changes or parallel considerations.
|
||||
- Keep total length reasonable, around 3-6 sentences equivalent. Don't write
|
||||
an essay.
|
||||
- **transition**: A short connective phrase to the next stop, in the colleague's
|
||||
voice. Examples: "Building on that...", "On a related note...", "To support
|
||||
that change...". Empty string for the last stop.
|
||||
- **anchors**: The specific diff hunks shown inline below the detail narrative.
|
||||
Each anchor MUST have a non-empty "hunk" field containing the actual unified
|
||||
diff text extracted from the changeset. The hunk must include the @@ line.
|
||||
|
||||
Valid hunk format (REQUIRED; every anchor needs this):
|
||||
|
||||
@@ -42,7 +42,9 @@
|
||||
function processRequest(req) {
|
||||
- const result = await fetch(url);
|
||||
- return result.json();
|
||||
+ const result = await fetch(url, { timeout: 5000 });
|
||||
+ if (!result.ok) throw new Error("HTTP " + result.status);
|
||||
+ return result.json();
|
||||
}
|
||||
|
||||
The label should be a substantive 1-sentence explanation of what this code
|
||||
section does or why it matters, not a filename paraphrase.
|
||||
E.g. "Adds a 5-second timeout and explicit error check to prevent silent hangs",
|
||||
not "Changes to request.ts".
|
||||
|
||||
### qa_checklist
|
||||
4 to 8 verification questions a HUMAN can actually answer. Two valid channels:
|
||||
|
||||
1. By READING the code (e.g., "Did we update both call sites of \`legacyAuth()\`?",
|
||||
"Are all uses of the old token format migrated?", "Does the error handler
|
||||
cover the new throw paths?").
|
||||
2. By manually USING the product (e.g., "Sign in, restart the browser, and
|
||||
confirm the session persists.", "Trigger a 503 from the API and confirm the
|
||||
retry banner appears.").
|
||||
|
||||
NOT machine-runnable test ideas. NOT generic "smoke test" framing. The reviewer
|
||||
is a person; what would THEY do to gain confidence?
|
||||
|
||||
Reference which stops each question relates to via stop_indices. Every question
|
||||
should reference at least one stop.
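
Example qa_checklist entry (illustrative):

{ "question": "Trigger a slow upstream response and confirm the request fails after roughly 5 seconds instead of hanging.", "stop_indices": [0] }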
|
||||
|
||||
## Pipeline
|
||||
|
||||
1. Read the full diff (git diff or inlined patch).
|
||||
2. Read CLAUDE.md and README.md for project context.
|
||||
3. Read commit messages (git log --oneline) and PR title/body if available.
|
||||
4. Identify logical groupings of change (cross-file when appropriate). These
|
||||
become stops.
|
||||
5. Determine reading flow order: entry point first, then outward. Definitions
|
||||
before consumers, cause before effect.
|
||||
6. Write the greeting, intent, before/after, takeaways, stops, and checklist
|
||||
in the voice of a coworker walking you through the work.
|
||||
7. Return structured JSON matching the schema.
|
||||
|
||||
## Hard constraints
|
||||
- Every anchor MUST have a non-empty "hunk" field. An anchor with an empty hunk
|
||||
is broken; it will show "diff not available" to the reviewer. Extract the
|
||||
real unified diff text from the input patch. Do not leave hunk blank.
|
||||
- Never fabricate line numbers. Extract them from the diff.
|
||||
- Gist must be ONE sentence. Not two. Not a run-on. One.
|
||||
- Detail supports markdown. Use it when it makes the explanation clearer, not
|
||||
for decoration. Plain prose is fine when the change is simple.
|
||||
- Anchor labels must explain the code's purpose or the change's impact, not
|
||||
just describe the filename.
|
||||
- key_takeaways: 3 to 5 items, each ONE sentence.
|
||||
- Stops are LOGICAL units, not files. Cross-file grouping is expected.
|
||||
- Stop ORDER is reading flow: entry point first, definitions before consumers,
|
||||
cause before effect, verification last.
|
||||
- Combine trivial changes (renames, imports, formatting) into one "Housekeeping"
|
||||
stop at the end, or omit them entirely.
|
||||
- QA questions must be answerable by a human, either by reading code or by
|
||||
using the product. Never frame them as automated tests.
|
||||
- NEVER use em-dashes (—) anywhere in the output. Use commas, colons,
|
||||
semicolons, parentheses, or separate sentences. This is a hard constraint.
|
||||
|
||||
## Calibration: tour, not review
|
||||
Your job is to EXPLAIN the changeset, not to critique it. If you genuinely
|
||||
spot a real bug or a meaningful behavioral concern while reading the code,
|
||||
surface it naturally in the relevant stop detail or as a warning takeaway.
|
||||
That's the colleague noticing something worth mentioning. But don't hunt for
|
||||
problems. Most clean changesets should have zero warnings and zero [!WARNING]
|
||||
callouts. The primary question is "what does this change do and why?" not
|
||||
"what's wrong with this code?"`;
|
||||
|
||||
function buildTourUserMessage(
|
||||
patch: string,
|
||||
diffType: DiffType,
|
||||
options?: { defaultBranch?: string; hasLocalAccess?: boolean; prDiffScope?: string },
|
||||
prMetadata?: PRMetadata,
|
||||
): string {
|
||||
if (prMetadata) {
|
||||
if (options?.prDiffScope === "full-stack") {
|
||||
return [
|
||||
`Full-stack tour of ${prMetadata.url}`,
|
||||
"",
|
||||
"This is a stacked PR. The diff below shows ALL accumulated changes from the repository default branch through this PR's head (not just this PR's own layer).",
|
||||
"Walk the reviewer through the complete changeset as a guided tour.",
|
||||
"",
|
||||
"```diff",
|
||||
patch,
|
||||
"```",
|
||||
].join("\n");
|
||||
}
|
||||
if (options?.hasLocalAccess) {
|
||||
return [
|
||||
prMetadata.url,
|
||||
"",
|
||||
"You are in a local worktree checked out at the PR head. The code is available locally.",
|
||||
`To see the PR changes, diff against the remote base branch: git diff origin/${prMetadata.baseBranch}...HEAD`,
|
||||
"Do NOT diff against the local `main` branch; it may be stale. Always use origin/.",
|
||||
"",
|
||||
"Walk the reviewer through this changeset as a guided tour.",
|
||||
].join("\n");
|
||||
}
|
||||
return [prMetadata.url, "", "Walk the reviewer through this PR as a guided tour."].join("\n");
|
||||
}
|
||||
|
||||
const effectiveDiffType = diffType.startsWith("worktree:")
|
||||
? diffType.split(":").pop() || "uncommitted"
|
||||
: diffType;
|
||||
|
||||
switch (effectiveDiffType) {
|
||||
case "uncommitted":
|
||||
return "Walk the reviewer through the current code changes (staged, unstaged, and untracked files) as a guided tour.";
|
||||
case "staged":
|
||||
return "Walk the reviewer through the currently staged code changes (`git diff --staged`) as a guided tour.";
|
||||
case "unstaged":
|
||||
return "Walk the reviewer through the unstaged code changes (tracked modifications and untracked files) as a guided tour.";
|
||||
case "last-commit":
|
||||
return "Walk the reviewer through the code changes introduced in the last commit (`git diff HEAD~1..HEAD`) as a guided tour.";
|
||||
case "branch": {
|
||||
const base = options?.defaultBranch || "main";
|
||||
return `Walk the reviewer through the code changes against the base branch '${base}' as a guided tour. Run \`git diff ${base}..HEAD\` to inspect the changes.`;
|
||||
}
|
||||
case "merge-base": {
|
||||
const base = options?.defaultBranch || "main";
|
||||
return `Walk the reviewer through the PR-style diff against base '${base}' as a guided tour. First find the common ancestor with \`git merge-base ${base} HEAD\`, then run \`git diff <merge-base>..HEAD\` using that commit to inspect only the changes introduced on this branch (matches GitHub's PR view).`;
|
||||
}
|
||||
case "all":
|
||||
return "Walk the reviewer through every file in the repository as a guided tour. All files are shown as additions (diffed against an empty tree).";
|
||||
default:
|
||||
return [
|
||||
"Walk the reviewer through the following code changes as a guided tour.",
|
||||
"",
|
||||
"```diff",
|
||||
patch,
|
||||
"```",
|
||||
].join("\n");
|
||||
}
|
||||
}
|
||||
|
||||
export interface TourClaudeCommandResult {
|
||||
command: string[];
|
||||
stdinPrompt: string;
|
||||
}
|
||||
|
||||
export function buildTourClaudeCommand(prompt: string, model: string = "sonnet", effort?: string): TourClaudeCommandResult {
|
||||
const allowedTools = [
|
||||
"Agent", "Read", "Glob", "Grep",
|
||||
"Bash(git status:*)", "Bash(git diff:*)", "Bash(git log:*)",
|
||||
"Bash(git show:*)", "Bash(git blame:*)", "Bash(git branch:*)",
|
||||
"Bash(git grep:*)", "Bash(git ls-remote:*)", "Bash(git ls-tree:*)",
|
||||
"Bash(git merge-base:*)", "Bash(git remote:*)", "Bash(git rev-parse:*)",
|
||||
"Bash(git show-ref:*)",
|
||||
"Bash(gh pr view:*)", "Bash(gh pr diff:*)", "Bash(gh pr list:*)",
|
||||
"Bash(gh api repos/*/*/pulls/*)", "Bash(gh api repos/*/*/pulls/*/files*)",
|
||||
// The tour prompt follows linked issues (`Fixes #123`, `Closes owner/repo#456`),
|
||||
// so the allowlist has to permit the issue-read commands.
|
||||
"Bash(gh issue view:*)", "Bash(gh api repos/*/*/issues/*)",
|
||||
"Bash(glab mr view:*)", "Bash(glab mr diff:*)",
|
||||
"Bash(glab issue view:*)",
|
||||
"Bash(wc:*)",
|
||||
].join(",");
|
||||
|
||||
const disallowedTools = [
|
||||
"Edit", "Write", "NotebookEdit", "WebFetch", "WebSearch",
|
||||
"Bash(python:*)", "Bash(python3:*)", "Bash(node:*)", "Bash(npx:*)",
|
||||
"Bash(bun:*)", "Bash(bunx:*)", "Bash(sh:*)", "Bash(bash:*)", "Bash(zsh:*)",
|
||||
"Bash(curl:*)", "Bash(wget:*)",
|
||||
].join(",");
|
||||
|
||||
return {
|
||||
command: [
|
||||
"claude", "-p",
|
||||
"--permission-mode", "dontAsk",
|
||||
"--output-format", "stream-json",
|
||||
"--verbose",
|
||||
"--json-schema", TOUR_SCHEMA_JSON,
|
||||
"--no-session-persistence",
|
||||
"--model", model,
|
||||
...(effort ? ["--effort", effort] : []),
|
||||
"--tools", "Agent,Bash,Read,Glob,Grep",
|
||||
"--allowedTools", allowedTools,
|
||||
"--disallowedTools", disallowedTools,
|
||||
],
|
||||
stdinPrompt: prompt,
|
||||
};
|
||||
}
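// Example usage (sketch only; assumes a Node-style spawn, whereas the actual caller is the
// extension's job runner): run the returned argv, write stdinPrompt to stdin, and collect
// stdout for parseTourStreamOutput (defined below).
//
//   import { spawn } from "node:child_process";
//
//   const { command, stdinPrompt } = buildTourClaudeCommand(prompt, "sonnet");
//   const child = spawn(command[0], command.slice(1), { stdio: ["pipe", "pipe", "inherit"] });
//   child.stdin.write(stdinPrompt);
//   child.stdin.end();
//   let stdout = "";
//   child.stdout.on("data", (chunk) => { stdout += String(chunk); });
//   child.on("close", () => {
//     const tour = parseTourStreamOutput(stdout);
//   });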
|
||||
|
||||
const TOUR_SCHEMA_DIR = join(homedir(), ".plannotator");
|
||||
const TOUR_SCHEMA_FILE = join(TOUR_SCHEMA_DIR, "tour-schema.json");
|
||||
let tourSchemaMaterialized = false;
|
||||
|
||||
async function ensureTourSchemaFile(): Promise<string> {
|
||||
if (!tourSchemaMaterialized) {
|
||||
await mkdir(TOUR_SCHEMA_DIR, { recursive: true });
|
||||
await writeFile(TOUR_SCHEMA_FILE, TOUR_SCHEMA_JSON);
|
||||
tourSchemaMaterialized = true;
|
||||
}
|
||||
return TOUR_SCHEMA_FILE;
|
||||
}
|
||||
|
||||
export function generateTourOutputPath(): string {
|
||||
return join(tmpdir(), `plannotator-tour-${crypto.randomUUID()}.json`);
|
||||
}
|
||||
|
||||
export async function buildTourCodexCommand(options: {
|
||||
cwd: string;
|
||||
outputPath: string;
|
||||
prompt: string;
|
||||
model?: string;
|
||||
reasoningEffort?: string;
|
||||
fastMode?: boolean;
|
||||
}): Promise<string[]> {
|
||||
const { cwd, outputPath, prompt, model, reasoningEffort, fastMode } = options;
|
||||
const schemaPath = await ensureTourSchemaFile();
|
||||
|
||||
const command = [
|
||||
"codex",
|
||||
// Global flags must precede the "exec" subcommand for the Codex CLI.
|
||||
...(model ? ["-m", model] : []),
|
||||
...(reasoningEffort ? ["-c", `model_reasoning_effort=${reasoningEffort}`] : []),
|
||||
...(fastMode ? ["-c", "service_tier=fast"] : []),
|
||||
"exec",
|
||||
"--output-schema", schemaPath,
|
||||
"-o", outputPath,
|
||||
"--full-auto", "--ephemeral",
|
||||
"-C", cwd,
|
||||
prompt,
|
||||
];
|
||||
|
||||
return command;
|
||||
}
|
||||
|
||||
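// Scans the Claude CLI's stream-json output from the end for the final "result" event.
// That event looks roughly like (shape inferred from the checks below; other event types
// in the stream are ignored):
//   {"type":"result","is_error":false,"structured_output":{"title":"...","stops":[...],"qa_checklist":[...]}}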
export function parseTourStreamOutput(stdout: string): CodeTourOutput | null {
|
||||
if (!stdout.trim()) return null;
|
||||
|
||||
const lines = stdout.trim().split('\n');
|
||||
for (let i = lines.length - 1; i >= 0; i--) {
|
||||
const line = lines[i].trim();
|
||||
if (!line) continue;
|
||||
|
||||
try {
|
||||
const event = JSON.parse(line);
|
||||
if (event.type === 'result') {
|
||||
if (event.is_error) return null;
|
||||
const output = event.structured_output;
|
||||
// A tour with no stops isn't a tour — treat as invalid so the UI
|
||||
// error state fires instead of rendering an empty walkthrough.
|
||||
if (!output || !Array.isArray(output.stops) || output.stops.length === 0) return null;
|
||||
return output as CodeTourOutput;
|
||||
}
|
||||
} catch {
|
||||
// Not valid JSON — skip
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
export async function parseTourFileOutput(outputPath: string): Promise<CodeTourOutput | null> {
|
||||
try {
|
||||
const text = await readFile(outputPath, "utf-8");
|
||||
try { await unlink(outputPath); } catch { /* ignore */ }
|
||||
if (!text.trim()) return null;
|
||||
const parsed = JSON.parse(text);
|
||||
// A tour with no stops isn't a tour — treat as invalid so the UI
|
||||
// error state fires instead of rendering an empty walkthrough.
|
||||
if (!parsed || !Array.isArray(parsed.stops) || parsed.stops.length === 0) return null;
|
||||
return parsed as CodeTourOutput;
|
||||
} catch {
|
||||
try { await unlink(outputPath); } catch { /* ignore */ }
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export interface TourSessionBuildCommandOptions {
|
||||
cwd: string;
|
||||
patch: string;
|
||||
diffType: DiffType;
|
||||
options?: { defaultBranch?: string; hasLocalAccess?: boolean };
|
||||
prMetadata?: PRMetadata;
|
||||
config?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface TourSessionBuildCommandResult {
|
||||
command: string[];
|
||||
outputPath?: string;
|
||||
captureStdout?: boolean;
|
||||
stdinPrompt?: string;
|
||||
cwd?: string;
|
||||
label?: string;
|
||||
prompt?: string;
|
||||
engine: "claude" | "codex";
|
||||
model: string;
|
||||
effort?: string;
|
||||
reasoningEffort?: string;
|
||||
fastMode?: boolean;
|
||||
}
|
||||
|
||||
export interface TourSessionJobSummary {
|
||||
correctness: string;
|
||||
explanation: string;
|
||||
confidence: number;
|
||||
}
|
||||
|
||||
export interface TourSessionJobRef {
|
||||
id: string;
|
||||
engine?: string;
|
||||
}
|
||||
|
||||
export interface TourSessionOnJobCompleteOptions {
|
||||
job: TourSessionJobRef;
|
||||
meta: { outputPath?: string; stdout?: string };
|
||||
}
|
||||
|
||||
export interface TourSession {
|
||||
tourResults: Map<string, CodeTourOutput>;
|
||||
tourChecklists: Map<string, boolean[]>;
|
||||
buildCommand(opts: TourSessionBuildCommandOptions): Promise<TourSessionBuildCommandResult>;
|
||||
onJobComplete(opts: TourSessionOnJobCompleteOptions): Promise<{ summary: TourSessionJobSummary | null }>;
|
||||
getTour(jobId: string): (CodeTourOutput & { checklist: boolean[] }) | null;
|
||||
saveChecklist(jobId: string, checked: boolean[]): void;
|
||||
}
|
||||
|
||||
export function createTourSession(): TourSession {
|
||||
const tourResults = new Map<string, CodeTourOutput>();
|
||||
const tourChecklists = new Map<string, boolean[]>();
|
||||
|
||||
return {
|
||||
tourResults,
|
||||
tourChecklists,
|
||||
|
||||
async buildCommand({ cwd, patch, diffType, options, prMetadata, config }) {
|
||||
const engine = (typeof config?.engine === "string" ? config.engine : "claude") as "claude" | "codex";
|
||||
const explicitModel = typeof config?.model === "string" && config.model ? config.model : null;
|
||||
// "sonnet" is a Claude model, so we must NOT pass it to Codex when no model
|
||||
// is explicitly selected. Leave the Codex model blank and let the Codex CLI pick its own default.
|
||||
const model = explicitModel ?? (engine === "codex" ? "" : "sonnet");
|
||||
const reasoningEffort = typeof config?.reasoningEffort === "string" && config.reasoningEffort ? config.reasoningEffort : undefined;
|
||||
const effort = typeof config?.effort === "string" && config.effort ? config.effort : undefined;
|
||||
const fastMode = config?.fastMode === true;
|
||||
const userMessage = buildTourUserMessage(patch, diffType, options, prMetadata);
|
||||
const prompt = TOUR_REVIEW_PROMPT + "\n\n---\n\n" + userMessage;
|
||||
|
||||
if (engine === "codex") {
|
||||
const outputPath = generateTourOutputPath();
|
||||
const command = await buildTourCodexCommand({ cwd, outputPath, prompt, model: model || undefined, reasoningEffort, fastMode });
|
||||
return { command, outputPath, prompt, label: "Code Tour", engine: "codex", model, reasoningEffort, fastMode: fastMode || undefined };
|
||||
}
|
||||
|
||||
const { command, stdinPrompt } = buildTourClaudeCommand(prompt, model, effort);
|
||||
return { command, stdinPrompt, prompt, cwd, label: "Code Tour", captureStdout: true, engine: "claude", model, effort };
|
||||
},
|
||||
|
||||
async onJobComplete({ job, meta }) {
|
||||
let output: CodeTourOutput | null = null;
|
||||
if (job.engine === "codex" && meta.outputPath) {
|
||||
output = await parseTourFileOutput(meta.outputPath);
|
||||
} else if (meta.stdout) {
|
||||
output = parseTourStreamOutput(meta.stdout);
|
||||
}
|
||||
|
||||
if (!output) {
|
||||
console.error(`[tour] Failed to parse output for job ${job.id}`);
|
||||
return { summary: null };
|
||||
}
|
||||
|
||||
tourResults.set(job.id, output);
|
||||
const summary: TourSessionJobSummary = {
|
||||
correctness: "Tour Generated",
|
||||
explanation: `${output.stops.length} stop${output.stops.length !== 1 ? "s" : ""}, ${output.qa_checklist.length} QA item${output.qa_checklist.length !== 1 ? "s" : ""}`,
|
||||
confidence: 1.0,
|
||||
};
|
||||
return { summary };
|
||||
},
|
||||
|
||||
getTour(jobId) {
|
||||
const tour = tourResults.get(jobId);
|
||||
if (!tour) return null;
|
||||
return { ...tour, checklist: tourChecklists.get(jobId) ?? [] };
|
||||
},
|
||||
|
||||
saveChecklist(jobId, checked) {
|
||||
tourChecklists.set(jobId, checked);
|
||||
},
|
||||
};
|
||||
}
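// Example lifecycle (sketch; the extension's job runner drives these steps, and jobId / stdout
// are placeholders here):
//   const session = createTourSession();
//   const built = await session.buildCommand({ cwd, patch, diffType: "branch", config: { engine: "claude" } });
//   // ... spawn built.command, feed built.stdinPrompt, collect stdout (or built.outputPath for Codex) ...
//   const { summary } = await session.onJobComplete({ job: { id: jobId, engine: built.engine }, meta: { stdout } });
//   const tour = session.getTour(jobId); // CodeTourOutput plus persisted checklist state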
|
||||
62
extensions/plannotator/generated/tour.ts
Normal file
62
extensions/plannotator/generated/tour.ts
Normal file
@@ -0,0 +1,62 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/tour.ts
|
||||
export interface TourDiffAnchor {
|
||||
/** Relative file path within the repo. */
|
||||
file: string;
|
||||
/** Start line in the new file (post-change). */
|
||||
line: number;
|
||||
/** End line in the new file. */
|
||||
end_line: number;
|
||||
/** Raw unified diff hunk for this anchor. */
|
||||
hunk: string;
|
||||
/** One-line chip label, e.g. "Add retry logic". */
|
||||
label: string;
|
||||
}
|
||||
|
||||
export interface TourKeyTakeaway {
|
||||
/** One sentence — the takeaway. */
|
||||
text: string;
|
||||
/** Severity for visual styling. */
|
||||
severity: "info" | "important" | "warning";
|
||||
}
|
||||
|
||||
export interface TourStop {
|
||||
/** Short chapter title, friendly tone. */
|
||||
title: string;
|
||||
/** ONE sentence — the headline for this stop. Scannable without expanding. */
|
||||
gist: string;
|
||||
/** 2-3 sentences of additional context. Only shown when expanded. */
|
||||
detail: string;
|
||||
/** Connective phrase to the next stop, e.g. "Building on that..." (empty for last stop). */
|
||||
transition: string;
|
||||
/** Diff anchors — the code locations this stop references. */
|
||||
anchors: TourDiffAnchor[];
|
||||
}
|
||||
|
||||
export interface TourQAItem {
|
||||
/** Product-level verification question. */
|
||||
question: string;
|
||||
/** Indices into stops[] that this question relates to. */
|
||||
stop_indices: number[];
|
||||
}
|
||||
|
||||
export interface CodeTourOutput {
|
||||
/** One-line title for the entire tour. */
|
||||
title: string;
|
||||
/** 1-2 sentence friendly greeting + summary. Conversational, not formal. */
|
||||
greeting: string;
|
||||
/** 1-3 sentences: why this changeset exists — the motivation/problem being solved. */
|
||||
intent: string;
|
||||
/** What things looked like before this changeset — one sentence. */
|
||||
before: string;
|
||||
/** What things look like after — one sentence. */
|
||||
after: string;
|
||||
/** 3-5 key takeaways — the most critical info, scannable at a glance. */
|
||||
key_takeaways: TourKeyTakeaway[];
|
||||
/** Ordered tour stops — the detailed walk-through. */
|
||||
stops: TourStop[];
|
||||
/** Product-level QA checklist. */
|
||||
qa_checklist: TourQAItem[];
|
||||
}
|
||||
|
||||
/** UI-side tour shape: server output extended with persisted checklist state. */
|
||||
export type CodeTourData = CodeTourOutput & { checklist: boolean[] };
|
||||
352
extensions/plannotator/generated/url-to-markdown.ts
Normal file
352
extensions/plannotator/generated/url-to-markdown.ts
Normal file
@@ -0,0 +1,352 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/url-to-markdown.ts
|
||||
/**
|
||||
* URL-to-Markdown conversion.
|
||||
*
|
||||
* Fetches a URL via Jina Reader (default) or plain fetch + Turndown,
|
||||
* returning clean markdown for the annotation pipeline.
|
||||
*/
|
||||
|
||||
import { htmlToMarkdown } from "./html-to-markdown";
|
||||
|
||||
export interface UrlToMarkdownOptions {
|
||||
/** Whether to use Jina Reader (true) or plain fetch+Turndown (false). */
|
||||
useJina: boolean;
|
||||
}
|
||||
|
||||
export interface UrlToMarkdownResult {
|
||||
markdown: string;
|
||||
source: "jina" | "fetch+turndown" | "fetch-raw" | "content-negotiation";
|
||||
}
|
||||
|
||||
/** True when the source indicates the markdown was converted from HTML,
|
||||
* not returned as-is from the origin. */
|
||||
export const isConvertedSource = (source: UrlToMarkdownResult["source"]): boolean =>
|
||||
source === "jina" || source === "fetch+turndown";
|
||||
|
||||
const FETCH_TIMEOUT_MS = 30_000;
|
||||
const MAX_BODY_BYTES = 10 * 1024 * 1024; // 10 MB — matches local HTML file guard
|
||||
|
||||
/**
|
||||
* Skip Jina for local/private URLs — fetch them directly instead.
|
||||
*
|
||||
* IMPORTANT — IPv6 hostname format (verified empirically in Bun 1.3.11 and Node 22):
|
||||
* The WHATWG URL `hostname` getter returns IPv6 addresses WITH brackets.
|
||||
* This is why PRIVATE_IPV6 uses `^\[` — it matches the actual runtime output.
|
||||
*
|
||||
* Verified outputs (both Bun and Node return identical results):
|
||||
* new URL("http://[::1]:3000/").hostname → "[::1]"
|
||||
* new URL("http://[fe80::1]/").hostname → "[fe80::1]"
|
||||
* new URL("http://[fc00::1]/").hostname → "[fc00::1]"
|
||||
* new URL("http://[fd12::1]/").hostname → "[fd12::1]"
|
||||
* new URL("http://[::ffff:192.168.0.1]/").hostname → "[::ffff:c0a8:1]"
|
||||
* new URL("http://[::ffff:169.254.169.254]/").hostname → "[::ffff:a9fe:a9fe]"
|
||||
*
|
||||
* The unbracketed "::1" check (line below) covers the edge case defensively.
|
||||
*/
|
||||
const PRIVATE_IPV4 = /^(10\.\d{1,3}|192\.168|172\.(1[6-9]|2\d|3[01])|169\.254)\.\d{1,3}\.\d{1,3}$/;
|
||||
// Bracketed IPv6 private/reserved prefixes (matches WHATWG URL hostname getter output).
|
||||
// fc00::/7 covers fc00:: through fdff::, so match [fc or [fd prefix.
|
||||
const PRIVATE_IPV6 = /^\[(::1|::ffff:|fe80:|fc[0-9a-f]{2}:|fd[0-9a-f]{2}:)/i;
|
||||
function isLocalUrl(url: string): boolean {
|
||||
try {
|
||||
const { hostname } = new URL(url);
|
||||
if (
|
||||
hostname === "localhost" ||
|
||||
hostname === "::1" ||
|
||||
hostname === "[::1]" ||
|
||||
hostname === "0.0.0.0" ||
|
||||
hostname.endsWith(".local") ||
|
||||
/^127\./.test(hostname) ||
|
||||
PRIVATE_IPV4.test(hostname)
|
||||
) {
|
||||
return true;
|
||||
}
|
||||
// IPv6 private ranges: link-local (fe80::), unique-local (fc00::/fd00::),
|
||||
// and IPv4-mapped (::ffff:) which embeds private IPv4 in hex notation
|
||||
if (PRIVATE_IPV6.test(hostname)) return true;
|
||||
return false;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
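// Examples (behavior follows from the checks above):
//   isLocalUrl("http://localhost:3000/") -> true
//   isLocalUrl("http://[::1]:8080/")     -> true   (bracketed IPv6 loopback)
//   isLocalUrl("http://10.0.0.5/")       -> true   (private IPv4)
//   isLocalUrl("https://example.com/")   -> false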
|
||||
|
||||
/**
|
||||
* Fetch a URL and return its content as markdown.
|
||||
*
|
||||
* When `useJina` is true, attempts Jina Reader first (returns markdown
|
||||
* directly, handles JS-rendered pages). On failure, warns to stderr
|
||||
* and falls back to plain fetch + Turndown.
|
||||
*/
|
||||
export async function urlToMarkdown(
|
||||
url: string,
|
||||
options: UrlToMarkdownOptions,
|
||||
): Promise<UrlToMarkdownResult> {
|
||||
// URLs pointing to markdown files — fetch raw if the server returns plain text.
|
||||
// If the server returns HTML (e.g. GitHub's .md viewer), fall through to Jina/Turndown.
|
||||
const urlPath = url.split("?")[0].split("#")[0];
|
||||
if (/\.mdx?$/i.test(urlPath)) {
|
||||
const text = await fetchRawText(url);
|
||||
if (text !== null) {
|
||||
return { markdown: text, source: "fetch-raw" };
|
||||
}
|
||||
// Server returned HTML for this .md URL — fall through to normal conversion
|
||||
}
|
||||
|
||||
// Content negotiation fast path — if the server natively returns markdown
|
||||
// (e.g. Cloudflare's Markdown for Agents), skip Jina/Turndown entirely.
|
||||
const local = isLocalUrl(url);
|
||||
if (!local) {
|
||||
const negotiated = await fetchViaContentNegotiation(url);
|
||||
if (negotiated !== null) {
|
||||
return { markdown: negotiated, source: "content-negotiation" };
|
||||
}
|
||||
}
|
||||
|
||||
if (options.useJina && !local) {
|
||||
try {
|
||||
const markdown = await fetchViaJina(url);
|
||||
return { markdown, source: "jina" };
|
||||
} catch (err) {
|
||||
process.stderr.write(
|
||||
`[plannotator] Warning: Jina Reader failed (${err instanceof Error ? err.message : String(err)}), falling back to direct fetch...\n`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
const markdown = await fetchViaTurndown(url);
|
||||
return { markdown, source: "fetch+turndown" };
|
||||
}
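// Example (sketch; the URL is a placeholder):
//   const { markdown, source } = await urlToMarkdown("https://example.com/docs", { useJina: true });
//   if (isConvertedSource(source)) { /* markdown was derived from HTML rather than served natively */ }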
|
||||
|
||||
/** Read response body with a size limit. Throws if the body exceeds MAX_BODY_BYTES. */
|
||||
async function readBodyWithLimit(res: Response): Promise<string> {
|
||||
const contentLength = res.headers.get("content-length");
|
||||
if (contentLength) {
|
||||
const bytes = parseInt(contentLength, 10);
|
||||
if (bytes > MAX_BODY_BYTES) {
|
||||
res.body?.cancel();
|
||||
throw new Error(`Response too large (${Math.round(bytes / 1024 / 1024)}MB, max 10MB)`);
|
||||
}
|
||||
}
|
||||
const reader = res.body?.getReader();
|
||||
if (!reader) {
|
||||
// Null body is rare (e.g. manually constructed Response). Still enforce
|
||||
// the size limit via the text result length as a best-effort fallback.
|
||||
const text = await res.text();
|
||||
if (text.length > MAX_BODY_BYTES) {
|
||||
throw new Error(`Response too large (>${Math.round(MAX_BODY_BYTES / 1024 / 1024)}MB, max 10MB)`);
|
||||
}
|
||||
return text;
|
||||
}
|
||||
|
||||
const chunks: Uint8Array[] = [];
|
||||
let totalBytes = 0;
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
totalBytes += value.byteLength;
|
||||
if (totalBytes > MAX_BODY_BYTES) {
|
||||
reader.cancel();
|
||||
throw new Error(`Response too large (>${Math.round(MAX_BODY_BYTES / 1024 / 1024)}MB, max 10MB)`);
|
||||
}
|
||||
chunks.push(value);
|
||||
}
|
||||
return new TextDecoder().decode(Buffer.concat(chunks));
|
||||
}
|
||||
|
||||
/**
|
||||
* Fetch a URL as raw text — for .md/.mdx URLs that are already markdown.
|
||||
* Returns null if the server returns HTML (e.g. GitHub's viewer page for
|
||||
* a .md file), signaling the caller to fall through to Jina/Turndown.
|
||||
*
|
||||
* Uses redirect: "manual" with isLocalUrl validation on each hop —
|
||||
* same SSRF protection as fetchViaTurndown.
|
||||
*/
|
||||
async function fetchRawText(url: string): Promise<string | null> {
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), FETCH_TIMEOUT_MS);
|
||||
const headers = { "User-Agent": "Mozilla/5.0 (compatible; Plannotator/1.0; +https://plannotator.ai)" };
|
||||
try {
|
||||
let currentUrl = url;
|
||||
let res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
|
||||
for (let i = 0; i < MAX_REDIRECTS && REDIRECT_STATUSES.has(res.status); i++) {
|
||||
const location = res.headers.get("location");
|
||||
if (!location) break;
|
||||
currentUrl = new URL(location, currentUrl).href;
|
||||
if (isLocalUrl(currentUrl)) {
|
||||
throw new Error(`Redirect to private/local URL blocked: ${currentUrl}`);
|
||||
}
|
||||
res.body?.cancel();
|
||||
res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
}
|
||||
|
||||
if (REDIRECT_STATUSES.has(res.status)) {
|
||||
res.body?.cancel();
|
||||
throw new Error("Too many redirects");
|
||||
}
|
||||
if (!res.ok) {
|
||||
res.body?.cancel();
|
||||
throw new Error(`HTTP ${res.status} ${res.statusText}`);
|
||||
}
|
||||
// If server returns HTML (e.g. GitHub's .md viewer), signal caller to
|
||||
// fall through to Jina/Turndown instead of using raw content
|
||||
const ct = res.headers.get("content-type") || "";
|
||||
if (ct.includes("text/html") || ct.includes("application/xhtml+xml")) {
|
||||
res.body?.cancel();
|
||||
return null;
|
||||
}
|
||||
return await readBodyWithLimit(res);
|
||||
} catch (err) {
|
||||
if (err instanceof Error && err.name === "AbortError") {
|
||||
throw new Error(`Timed out fetching ${url}`);
|
||||
}
|
||||
throw err;
|
||||
} finally {
|
||||
clearTimeout(timer);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Content negotiation fast path — request `text/markdown` via the Accept header.
|
||||
* Sites that support Cloudflare's "Markdown for Agents" (or similar) will return
|
||||
* markdown directly, letting us skip Jina and Turndown entirely.
|
||||
* Returns null if the server doesn't serve markdown.
|
||||
*/
|
||||
const NEGOTIATION_TIMEOUT_MS = 5_000; // Short timeout — this is a best-effort optimization
|
||||
|
||||
async function fetchViaContentNegotiation(url: string): Promise<string | null> {
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), NEGOTIATION_TIMEOUT_MS);
|
||||
const headers = {
|
||||
"User-Agent": "Mozilla/5.0 (compatible; Plannotator/1.0; +https://plannotator.ai)",
|
||||
Accept: "text/markdown, text/html;q=0.9",
|
||||
};
|
||||
|
||||
try {
|
||||
let currentUrl = url;
|
||||
let res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
|
||||
for (let i = 0; i < MAX_REDIRECTS && REDIRECT_STATUSES.has(res.status); i++) {
|
||||
const location = res.headers.get("location");
|
||||
if (!location) break;
|
||||
currentUrl = new URL(location, currentUrl).href;
|
||||
if (isLocalUrl(currentUrl)) {
|
||||
res.body?.cancel();
|
||||
return null;
|
||||
}
|
||||
res.body?.cancel();
|
||||
res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
}
|
||||
|
||||
if (!res.ok) {
|
||||
res.body?.cancel();
|
||||
return null;
|
||||
}
|
||||
|
||||
const ct = res.headers.get("content-type") || "";
|
||||
if (!ct.includes("text/markdown")) {
|
||||
res.body?.cancel();
|
||||
return null;
|
||||
}
|
||||
|
||||
return await readBodyWithLimit(res);
|
||||
} catch {
|
||||
return null;
|
||||
} finally {
|
||||
clearTimeout(timer);
|
||||
}
|
||||
}
|
||||
|
||||
/** Fetch via Jina Reader — returns markdown directly. */
|
||||
async function fetchViaJina(url: string): Promise<string> {
|
||||
// Strip the fragment (it is never sent to the server) before appending the URL to Jina's path-based reader endpoint
|
||||
const cleanUrl = url.split("#")[0];
|
||||
const jinaUrl = `https://r.jina.ai/${cleanUrl}`;
|
||||
const headers: Record<string, string> = {
|
||||
Accept: "text/plain",
|
||||
};
|
||||
|
||||
const apiKey = process.env.JINA_API_KEY;
|
||||
if (apiKey) {
|
||||
headers.Authorization = `Bearer ${apiKey}`;
|
||||
}
|
||||
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), FETCH_TIMEOUT_MS);
|
||||
|
||||
try {
|
||||
const res = await fetch(jinaUrl, { headers, signal: controller.signal });
|
||||
if (!res.ok) {
|
||||
res.body?.cancel();
|
||||
throw new Error(`HTTP ${res.status} ${res.statusText}`);
|
||||
}
|
||||
return await readBodyWithLimit(res);
|
||||
} catch (err) {
|
||||
if (err instanceof Error && err.name === "AbortError") {
|
||||
throw new Error("timed out");
|
||||
}
|
||||
throw err;
|
||||
} finally {
|
||||
clearTimeout(timer);
|
||||
}
|
||||
}
|
||||
|
||||
const MAX_REDIRECTS = 10;
|
||||
const REDIRECT_STATUSES = new Set([301, 302, 303, 307, 308]);
|
||||
|
||||
/** Fetch raw HTML and convert via Turndown. Follows redirects manually to validate each hop. */
|
||||
async function fetchViaTurndown(url: string): Promise<string> {
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), FETCH_TIMEOUT_MS);
|
||||
|
||||
const headers = {
|
||||
"User-Agent":
|
||||
"Mozilla/5.0 (compatible; Plannotator/1.0; +https://plannotator.ai)",
|
||||
Accept: "text/html,application/xhtml+xml",
|
||||
};
|
||||
|
||||
try {
|
||||
let currentUrl = url;
|
||||
let res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
|
||||
for (let i = 0; i < MAX_REDIRECTS && REDIRECT_STATUSES.has(res.status); i++) {
|
||||
const location = res.headers.get("location");
|
||||
if (!location) break;
|
||||
|
||||
currentUrl = new URL(location, currentUrl).href;
|
||||
if (isLocalUrl(currentUrl)) {
|
||||
throw new Error(`Redirect to private/local URL blocked: ${currentUrl}`);
|
||||
}
|
||||
res.body?.cancel();
|
||||
res = await fetch(currentUrl, { headers, redirect: "manual", signal: controller.signal });
|
||||
}
|
||||
|
||||
if (REDIRECT_STATUSES.has(res.status)) {
|
||||
res.body?.cancel();
|
||||
throw new Error("Too many redirects");
|
||||
}
|
||||
if (!res.ok) {
|
||||
res.body?.cancel();
|
||||
throw new Error(`HTTP ${res.status} ${res.statusText}`);
|
||||
}
|
||||
const contentType = res.headers.get("content-type") || "";
|
||||
if (
|
||||
!contentType.includes("text/html") &&
|
||||
!contentType.includes("application/xhtml+xml")
|
||||
) {
|
||||
res.body?.cancel();
|
||||
throw new Error(
|
||||
`Not an HTML page (content-type: ${contentType})`,
|
||||
);
|
||||
}
|
||||
const html = await readBodyWithLimit(res);
|
||||
return htmlToMarkdown(html);
|
||||
} catch (err) {
|
||||
if (err instanceof Error && err.name === "AbortError") {
|
||||
throw new Error(`Timed out fetching ${url}`);
|
||||
}
|
||||
throw err;
|
||||
} finally {
|
||||
clearTimeout(timer);
|
||||
}
|
||||
}
|
||||
104
extensions/plannotator/generated/worktree-pool.ts
Normal file
104
extensions/plannotator/generated/worktree-pool.ts
Normal file
@@ -0,0 +1,104 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/worktree-pool.ts
|
||||
/**
|
||||
* Worktree Pool — manages a set of per-PR git worktrees for a review session.
|
||||
*
|
||||
* Runtime-agnostic. Uses ReviewGitRuntime for all git operations.
|
||||
* Both Bun and Pi servers import this module (Pi via vendor.sh).
|
||||
*
|
||||
* Each PR visited during a session gets its own worktree, created on first
|
||||
* access and cached for the session lifetime. Agents run in their PR's
|
||||
* worktree undisturbed by PR switches.
|
||||
*/
|
||||
|
||||
import { join } from "node:path";
|
||||
import type { ReviewGitRuntime } from "./review-core";
|
||||
import type { PRMetadata } from "./pr-provider";
|
||||
import { createWorktree, removeWorktree, fetchRef, ensureObjectAvailable } from "./worktree";
|
||||
|
||||
export interface PoolEntry {
|
||||
path: string;
|
||||
prUrl: string;
|
||||
number: number;
|
||||
ready: boolean;
|
||||
}
|
||||
|
||||
export interface WorktreePoolConfig {
|
||||
sessionDir: string;
|
||||
repoDir: string;
|
||||
isSameRepo: boolean;
|
||||
}
|
||||
|
||||
export interface WorktreePool {
|
||||
get(prUrl: string): PoolEntry | undefined;
|
||||
has(prUrl: string): boolean;
|
||||
resolve(prUrl: string): string | undefined;
|
||||
ensure(runtime: ReviewGitRuntime, metadata: PRMetadata): Promise<PoolEntry>;
|
||||
entries(): IterableIterator<PoolEntry>;
|
||||
cleanup(runtime: ReviewGitRuntime): Promise<void>;
|
||||
}
|
||||
|
||||
export function createWorktreePool(config: WorktreePoolConfig, initial?: PoolEntry): WorktreePool {
|
||||
const pool = new Map<string, PoolEntry>();
|
||||
const pending = new Map<string, Promise<PoolEntry>>();
|
||||
if (initial) pool.set(initial.prUrl, initial);
|
||||
|
||||
return {
|
||||
get(prUrl) { return pool.get(prUrl); },
|
||||
has(prUrl) { return pool.has(prUrl); },
|
||||
resolve(prUrl) {
|
||||
const entry = pool.get(prUrl);
|
||||
return entry?.ready ? entry.path : undefined;
|
||||
},
|
||||
|
||||
async ensure(runtime, metadata) {
|
||||
const existing = pool.get(metadata.url);
|
||||
if (existing?.ready) return existing;
|
||||
|
||||
const inflight = pending.get(metadata.url);
|
||||
if (inflight) return inflight;
|
||||
|
||||
if (!config.isSameRepo) {
|
||||
throw new Error("Cross-repo pool cannot create worktrees for other PRs");
|
||||
}
|
||||
|
||||
const promise = (async (): Promise<PoolEntry> => {
|
||||
const number = metadata.platform === "github" ? metadata.number : metadata.iid;
|
||||
const worktreePath = join(config.sessionDir, "pool", `pr-${number}`);
|
||||
const refSpec = metadata.platform === "github"
|
||||
? `refs/pull/${number}/head`
|
||||
: `refs/merge-requests/${number}/head`;
|
||||
|
||||
await fetchRef(runtime, metadata.baseBranch, { cwd: config.repoDir });
|
||||
await ensureObjectAvailable(runtime, metadata.baseSha, { cwd: config.repoDir });
|
||||
await fetchRef(runtime, refSpec, { cwd: config.repoDir });
|
||||
|
||||
await createWorktree(runtime, {
|
||||
ref: "FETCH_HEAD",
|
||||
path: worktreePath,
|
||||
detach: true,
|
||||
cwd: config.repoDir,
|
||||
});
|
||||
|
||||
const entry: PoolEntry = { path: worktreePath, prUrl: metadata.url, number, ready: true };
|
||||
pool.set(metadata.url, entry);
|
||||
return entry;
|
||||
})();
|
||||
|
||||
pending.set(metadata.url, promise);
|
||||
try {
|
||||
return await promise;
|
||||
} finally {
|
||||
pending.delete(metadata.url);
|
||||
}
|
||||
},
|
||||
|
||||
entries() { return pool.values(); },
|
||||
|
||||
async cleanup(runtime) {
|
||||
for (const entry of pool.values()) {
|
||||
await removeWorktree(runtime, entry.path, { force: true, cwd: config.repoDir });
|
||||
}
|
||||
pool.clear();
|
||||
},
|
||||
};
|
||||
}
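// Example usage (sketch; runtime, sessionDir, repoDir, and prMetadata come from the review
// session that owns the pool):
//   const pool = createWorktreePool({ sessionDir, repoDir, isSameRepo: true });
//   const entry = await pool.ensure(runtime, prMetadata); // creates the worktree on first access
//   const path = pool.resolve(prMetadata.url);            // same path once the entry is ready
//   await pool.cleanup(runtime);                          // removes all worktrees at session end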
|
||||
120
extensions/plannotator/generated/worktree.ts
Normal file
120
extensions/plannotator/generated/worktree.ts
Normal file
@@ -0,0 +1,120 @@
|
||||
// @generated — DO NOT EDIT. Source: packages/shared/worktree.ts
|
||||
/**
|
||||
* Worktree — runtime-agnostic git worktree primitives.
|
||||
*
|
||||
* Uses ReviewGitRuntime so both Bun and Node runtimes can use the same logic.
|
||||
* Lives in packages/shared/ and gets vendored to Pi via vendor.sh.
|
||||
*
|
||||
* Designed as composable primitives, not tied to any specific use case.
|
||||
* PR local checkout, agent sandboxes, parallel sessions — all compose from these.
|
||||
*/
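// Example composition (sketch; mirrors how the PR worktree pool uses these primitives, with
// runtime, repoDir, and worktreePath supplied by the caller):
//   await fetchRef(runtime, "refs/pull/123/head", { cwd: repoDir });
//   await createWorktree(runtime, { ref: "FETCH_HEAD", path: worktreePath, detach: true, cwd: repoDir });
//   // ... run the review agent inside worktreePath ...
//   await removeWorktree(runtime, worktreePath, { force: true, cwd: repoDir });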
|
||||
|
||||
import type { ReviewGitRuntime } from "./review-core";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Types
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface CreateWorktreeOptions {
|
||||
/** Git ref to check out (branch name, SHA, FETCH_HEAD, etc.) */
|
||||
ref: string;
|
||||
/** Absolute path where the worktree will be created. */
|
||||
path: string;
|
||||
/** Create in detached HEAD mode (no branch). Default: false. */
|
||||
detach?: boolean;
|
||||
/** CWD of the source repository. Defaults to process.cwd(). */
|
||||
cwd?: string;
|
||||
}
|
||||
|
||||
export interface RemoveWorktreeOptions {
|
||||
/** Force removal even if the worktree has modifications. Default: false. */
|
||||
force?: boolean;
|
||||
/** CWD of the source repository. Required if the worktree was created from a different cwd. */
|
||||
cwd?: string;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Primitives
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Fetch a ref from origin.
|
||||
* Runs: `git fetch origin <ref>`
|
||||
* Throws on failure.
|
||||
*/
|
||||
export async function fetchRef(
|
||||
runtime: ReviewGitRuntime,
|
||||
ref: string,
|
||||
options?: { cwd?: string },
|
||||
): Promise<void> {
|
||||
const result = await runtime.runGit(["fetch", "origin", "--", ref], { cwd: options?.cwd });
|
||||
if (result.exitCode !== 0) {
|
||||
throw new Error(`git fetch origin ${ref} failed: ${result.stderr.trim() || `exit code ${result.exitCode}`}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Ensure a git object (commit SHA) is available locally.
|
||||
* Checks with `git cat-file -t`, fetches from origin if missing.
|
||||
* Returns true if the object is available after the attempt.
|
||||
*/
|
||||
export async function ensureObjectAvailable(
|
||||
runtime: ReviewGitRuntime,
|
||||
sha: string,
|
||||
options?: { cwd?: string },
|
||||
): Promise<boolean> {
|
||||
const check = await runtime.runGit(["cat-file", "-t", sha], { cwd: options?.cwd });
|
||||
if (check.exitCode === 0) return true;
|
||||
|
||||
// Object missing locally — try fetching it
|
||||
const fetch = await runtime.runGit(["fetch", "origin", "--", sha], { cwd: options?.cwd });
|
||||
if (fetch.exitCode !== 0) return false;
|
||||
|
||||
// Verify it's now available
|
||||
const recheck = await runtime.runGit(["cat-file", "-t", sha], { cwd: options?.cwd });
|
||||
return recheck.exitCode === 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a git worktree.
|
||||
* Runs: `git worktree add [--detach] <path> <ref>`
|
||||
* Throws on failure with a descriptive error.
|
||||
*/
|
||||
export async function createWorktree(
|
||||
runtime: ReviewGitRuntime,
|
||||
options: CreateWorktreeOptions,
|
||||
): Promise<{ worktreePath: string }> {
|
||||
const args = ["worktree", "add"];
|
||||
if (options.detach) args.push("--detach");
|
||||
args.push(options.path, options.ref);
|
||||
|
||||
const result = await runtime.runGit(args, { cwd: options.cwd });
|
||||
if (result.exitCode !== 0) {
|
||||
throw new Error(`git worktree add failed: ${result.stderr.trim() || `exit code ${result.exitCode}`}`);
|
||||
}
|
||||
|
||||
return { worktreePath: options.path };
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a git worktree. Best-effort — logs errors but does not throw.
|
||||
* Runs: `git worktree remove [--force] <path>`
|
||||
*/
|
||||
export async function removeWorktree(
|
||||
runtime: ReviewGitRuntime,
|
||||
worktreePath: string,
|
||||
options?: RemoveWorktreeOptions,
|
||||
): Promise<void> {
|
||||
const args = ["worktree", "remove"];
|
||||
if (options?.force) args.push("--force");
|
||||
args.push(worktreePath);
|
||||
|
||||
try {
|
||||
const result = await runtime.runGit(args, { cwd: options?.cwd });
|
||||
if (result.exitCode !== 0) {
|
||||
console.error(`Warning: git worktree remove failed for ${worktreePath}: ${result.stderr.trim()}`);
|
||||
}
|
||||
} catch (err) {
|
||||
console.error(`Warning: worktree cleanup error: ${err instanceof Error ? err.message : String(err)}`);
|
||||
}
|
||||
}
|
||||
1266
extensions/plannotator/index.ts
Normal file
1266
extensions/plannotator/index.ts
Normal file
File diff suppressed because it is too large
4457
extensions/plannotator/package-lock.json
generated
Normal file
4457
extensions/plannotator/package-lock.json
generated
Normal file
File diff suppressed because it is too large
57
extensions/plannotator/package.json
Normal file
57
extensions/plannotator/package.json
Normal file
@@ -0,0 +1,57 @@
|
||||
{
|
||||
"name": "@plannotator/pi-extension",
|
||||
"version": "0.19.10",
|
||||
"type": "module",
|
||||
"description": "Plannotator Pi extension - interactive plan review with annotations, annotate agent messages, and review code/PRs",
|
||||
"author": "backnotprop",
|
||||
"license": "MIT OR Apache-2.0",
|
||||
"repository": {
|
||||
"type": "git",
|
||||
"url": "git+https://github.com/backnotprop/plannotator.git",
|
||||
"directory": "apps/pi-extension"
|
||||
},
|
||||
"homepage": "https://github.com/backnotprop/plannotator",
|
||||
"bugs": {
|
||||
"url": "https://github.com/backnotprop/plannotator/issues"
|
||||
},
|
||||
"keywords": ["pi-package", "plannotator", "plan-review", "ai-agent", "coding-agent"],
|
||||
"pi": {
|
||||
"extensions": ["./"],
|
||||
"skills": ["./skills"]
|
||||
},
|
||||
"files": [
|
||||
"index.ts",
|
||||
"assistant-message.ts",
|
||||
"current-pi-session.ts",
|
||||
"plannotator-browser.ts",
|
||||
"plannotator-events.ts",
|
||||
"server.ts",
|
||||
"tool-scope.ts",
|
||||
"config.ts",
|
||||
"plannotator.json",
|
||||
"server/",
|
||||
"generated/",
|
||||
"README.md",
|
||||
"plannotator.html",
|
||||
"review-editor.html",
|
||||
"skills/"
|
||||
],
|
||||
"scripts": {
|
||||
"build": "cp ../hook/dist/index.html plannotator.html && cp ../hook/dist/review.html review-editor.html && rm -rf skills && cp -r ../skills skills && bash vendor.sh",
|
||||
"prepublishOnly": "cd ../.. && bun run build:pi"
|
||||
},
|
||||
"dependencies": {
|
||||
"@pierre/diffs": "^1.1.12",
|
||||
"turndown": "^7.2.4",
|
||||
"@joplin/turndown-plugin-gfm": "^1.0.64"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"@mariozechner/pi-coding-agent": ">=0.53.0"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@mariozechner/pi-coding-agent": ">=0.53.0",
|
||||
"@mariozechner/pi-agent-core": ">=0.53.0",
|
||||
"@mariozechner/pi-ai": ">=0.53.0",
|
||||
"@mariozechner/pi-tui": ">=0.53.0"
|
||||
}
|
||||
}
|
||||
549
extensions/plannotator/plannotator-browser.ts
Normal file
549
extensions/plannotator/plannotator-browser.ts
Normal file
@@ -0,0 +1,549 @@
|
||||
import { existsSync, readFileSync, realpathSync, rmSync, statSync } from "node:fs";
|
||||
import { dirname, join, resolve } from "node:path";
|
||||
import { tmpdir } from "node:os";
|
||||
import { spawnSync } from "node:child_process";
|
||||
import { createWorktreePool, type WorktreePool } from "./generated/worktree-pool.js";
|
||||
import { fileURLToPath } from "node:url";
|
||||
import type { ExtensionContext } from "@mariozechner/pi-coding-agent";
|
||||
import {
|
||||
getGitContext,
|
||||
reviewRuntime,
|
||||
runGitDiff,
|
||||
startAnnotateServer,
|
||||
startPlanReviewServer,
|
||||
startReviewServer,
|
||||
type DiffType,
|
||||
} from "./server.js";
|
||||
import { openBrowser, isRemoteSession } from "./server/network.js";
|
||||
import { parsePRUrl, checkPRAuth, fetchPR } from "./server/pr.js";
|
||||
import {
|
||||
getMRLabel,
|
||||
getMRNumberLabel,
|
||||
getDisplayRepo,
|
||||
getCliName,
|
||||
getCliInstallUrl,
|
||||
} from "./generated/pr-provider.js";
|
||||
import { parseRemoteUrl } from "./generated/repo.js";
|
||||
import { fetchRef, createWorktree, removeWorktree, ensureObjectAvailable } from "./generated/worktree.js";
|
||||
import { loadConfig, resolveDefaultDiffType } from "./generated/config.js";
|
||||
export { getLastAssistantMessageText } from "./assistant-message.js";
|
||||
|
||||
export type AnnotateMode = "annotate" | "annotate-folder" | "annotate-last";
|
||||
export interface PlanReviewDecision {
|
||||
approved: boolean;
|
||||
feedback?: string;
|
||||
savedPath?: string;
|
||||
agentSwitch?: string;
|
||||
permissionMode?: string;
|
||||
}
|
||||
|
||||
export interface BrowserDecisionSession<T> {
|
||||
url: string;
|
||||
waitForDecision: () => Promise<T>;
|
||||
stop: () => void;
|
||||
}
|
||||
|
||||
export interface PlanReviewBrowserSession extends BrowserDecisionSession<PlanReviewDecision> {
|
||||
reviewId: string;
|
||||
onDecision: (listener: (result: PlanReviewDecision) => void | Promise<void>) => () => void;
|
||||
}
|
||||
|
||||
const __dirname = dirname(fileURLToPath(import.meta.url));
|
||||
let planHtmlContent = "";
|
||||
let reviewHtmlContent = "";
|
||||
|
||||
try {
|
||||
planHtmlContent = readFileSync(resolve(__dirname, "plannotator.html"), "utf-8");
|
||||
} catch {
|
||||
// built assets unavailable
|
||||
}
|
||||
|
||||
try {
|
||||
reviewHtmlContent = readFileSync(resolve(__dirname, "review-editor.html"), "utf-8");
|
||||
} catch {
|
||||
// built assets unavailable
|
||||
}
|
||||
|
||||
function delay(ms: number): Promise<void> {
|
||||
return new Promise((resolvePromise) => setTimeout(resolvePromise, ms));
|
||||
}
|
||||
|
||||
export function hasPlanBrowserHtml(): boolean {
|
||||
return Boolean(planHtmlContent);
|
||||
}
|
||||
|
||||
export function hasReviewBrowserHtml(): boolean {
|
||||
return Boolean(reviewHtmlContent);
|
||||
}
|
||||
|
||||
export function getStartupErrorMessage(err: unknown): string {
|
||||
return err instanceof Error ? err.message : "Unknown error";
|
||||
}
|
||||
|
||||
function openBrowserForServer(serverUrl: string, ctx: ExtensionContext): void {
|
||||
const browserResult = openBrowser(serverUrl);
|
||||
if (isRemoteSession()) {
|
||||
ctx.ui.notify(`[Plannotator] ${serverUrl}`, "info");
|
||||
} else if (!browserResult.opened) {
|
||||
ctx.ui.notify(`Open this URL to review: ${serverUrl}`, "info");
|
||||
}
|
||||
}
|
||||
|
||||
async function openBrowserAndWait<T>(
|
||||
server: { url: string; stop: () => void },
|
||||
ctx: ExtensionContext,
|
||||
waitForResult: () => Promise<T>,
|
||||
): Promise<T> {
|
||||
openBrowserForServer(server.url, ctx);
|
||||
return waitForDecisionWithCleanup(server, waitForResult);
|
||||
}
|
||||
|
||||
async function waitForDecisionWithCleanup<T>(
|
||||
server: { url: string; stop: () => void },
|
||||
waitForResult: () => Promise<T>,
|
||||
): Promise<T> {
|
||||
try {
|
||||
const result = await waitForResult();
|
||||
await delay(1500);
|
||||
return result;
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
}
|
||||
|
||||
function startBrowserDecisionSession<T>(
|
||||
server: { url: string; stop: () => void },
|
||||
ctx: ExtensionContext,
|
||||
waitForResult: () => Promise<T>,
|
||||
): BrowserDecisionSession<T> {
|
||||
openBrowserForServer(server.url, ctx);
|
||||
let stopped = false;
|
||||
let stopReject: ((err: Error) => void) | undefined;
|
||||
let decisionPromise: Promise<T> | undefined;
|
||||
const createStoppedError = () => new Error("Plannotator browser session was stopped.");
|
||||
const stop = () => {
|
||||
if (stopped) return;
|
||||
stopped = true;
|
||||
server.stop();
|
||||
stopReject?.(createStoppedError());
|
||||
stopReject = undefined;
|
||||
};
|
||||
|
||||
return {
|
||||
url: server.url,
|
||||
waitForDecision: () => {
|
||||
if (decisionPromise) return decisionPromise;
|
||||
if (stopped) return Promise.reject(createStoppedError());
|
||||
decisionPromise = (async () => {
|
||||
const stoppedPromise = new Promise<never>((_, reject) => {
|
||||
stopReject = reject;
|
||||
});
|
||||
try {
|
||||
const result = await Promise.race([waitForResult(), stoppedPromise]);
|
||||
stopReject = undefined;
|
||||
await delay(1500);
|
||||
return result;
|
||||
} finally {
|
||||
stop();
|
||||
}
|
||||
})();
|
||||
return decisionPromise;
|
||||
},
|
||||
stop,
|
||||
};
|
||||
}
|
||||
|
||||
export async function startPlanReviewBrowserSession(
|
||||
ctx: ExtensionContext,
|
||||
planContent: string,
|
||||
): Promise<PlanReviewBrowserSession> {
|
||||
if (!ctx.hasUI || !planHtmlContent) {
|
||||
throw new Error("Plannotator browser review is unavailable in this session.");
|
||||
}
|
||||
|
||||
const server = await startPlanReviewServer({
|
||||
plan: planContent,
|
||||
htmlContent: planHtmlContent,
|
||||
origin: "pi",
|
||||
sharingEnabled: process.env.PLANNOTATOR_SHARE !== "disabled",
|
||||
shareBaseUrl: process.env.PLANNOTATOR_SHARE_URL || undefined,
|
||||
pasteApiUrl: process.env.PLANNOTATOR_PASTE_URL || undefined,
|
||||
});
|
||||
|
||||
const session = startBrowserDecisionSession(server, ctx, server.waitForDecision);
|
||||
server.onDecision(() => {
|
||||
setTimeout(() => session.stop(), 1500);
|
||||
});
|
||||
|
||||
return {
|
||||
...session,
|
||||
reviewId: server.reviewId,
|
||||
onDecision: server.onDecision,
|
||||
};
|
||||
}
|
||||
|
||||
export async function openPlanReviewBrowser(
|
||||
ctx: ExtensionContext,
|
||||
planContent: string,
|
||||
): Promise<PlanReviewDecision> {
|
||||
const session = await startPlanReviewBrowserSession(ctx, planContent);
|
||||
return session.waitForDecision();
|
||||
}
|
||||
|
||||
export async function openCodeReview(
|
||||
ctx: ExtensionContext,
|
||||
options: { cwd?: string; defaultBranch?: string; diffType?: DiffType; prUrl?: string } = {},
|
||||
): Promise<{ approved: boolean; feedback?: string; annotations?: unknown[]; agentSwitch?: string; exit?: boolean }> {
|
||||
const session = await startCodeReviewBrowserSession(ctx, options);
|
||||
return session.waitForDecision();
|
||||
}
|
||||
|
||||
export async function startCodeReviewBrowserSession(
|
||||
ctx: ExtensionContext,
|
||||
options: { cwd?: string; defaultBranch?: string; diffType?: DiffType; prUrl?: string } = {},
|
||||
): Promise<
|
||||
BrowserDecisionSession<{
|
||||
approved: boolean;
|
||||
feedback?: string;
|
||||
annotations?: unknown[];
|
||||
agentSwitch?: string;
|
||||
exit?: boolean;
|
||||
}>
|
||||
> {
|
||||
if (!ctx.hasUI || !reviewHtmlContent) {
|
||||
throw new Error("Plannotator code review browser is unavailable in this session.");
|
||||
}
|
||||
|
||||
const urlArg = options.prUrl;
|
||||
const isPRMode = urlArg?.startsWith("http://") || urlArg?.startsWith("https://");
|
||||
|
||||
let rawPatch: string;
|
||||
let gitRef: string;
|
||||
let diffError: string | undefined;
|
||||
let gitCtx: Awaited<ReturnType<typeof getGitContext>> | undefined;
|
||||
let prMetadata: Awaited<ReturnType<typeof fetchPR>>["metadata"] | undefined;
|
||||
let diffType: DiffType | undefined;
|
||||
let agentCwd: string | undefined;
|
||||
let initialBase: string | undefined;
|
||||
let worktreeCleanup: (() => void | Promise<void>) | undefined;
|
||||
let worktreePool: WorktreePool | undefined;
|
||||
let exitHandler: (() => void) | undefined;
|
||||
|
||||
if (isPRMode && urlArg) {
|
||||
// --- PR Review Mode ---
|
||||
const prRef = parsePRUrl(urlArg);
|
||||
if (!prRef) {
|
||||
throw new Error(
|
||||
`Invalid PR/MR URL: ${urlArg}\n` +
|
||||
"Supported formats:\n" +
|
||||
" GitHub: https://github.com/owner/repo/pull/123\n" +
|
||||
" GitLab: https://gitlab.com/group/project/-/merge_requests/42",
|
||||
);
|
||||
}
|
||||
|
||||
const cliName = getCliName(prRef);
|
||||
const cliUrl = getCliInstallUrl(prRef);
|
||||
|
||||
try {
|
||||
await checkPRAuth(prRef);
|
||||
} catch (err) {
|
||||
const msg = err instanceof Error ? err.message : String(err);
|
||||
if (msg.includes("not found") || msg.includes("ENOENT")) {
|
||||
throw new Error(`${cliName === "gh" ? "GitHub" : "GitLab"} CLI (${cliName}) is not installed. Install it from ${cliUrl}`);
|
||||
}
|
||||
throw err;
|
||||
}
|
||||
|
||||
console.error(`Fetching ${getMRLabel(prRef)} ${getMRNumberLabel(prRef)} from ${getDisplayRepo(prRef)}...`);
|
||||
const pr = await fetchPR(prRef);
|
||||
rawPatch = pr.rawPatch;
|
||||
gitRef = `${getMRLabel(prRef)} ${getMRNumberLabel(prRef)}`;
|
||||
prMetadata = pr.metadata;
|
||||
|
||||
// Create local worktree for agent file access (--local is the default for PR reviews)
|
||||
let localPath: string | undefined;
|
||||
let sessionDir: string | undefined;
|
||||
try {
|
||||
const repoDir = options.cwd ?? ctx.cwd;
|
||||
const identifier = prMetadata.platform === "github"
|
||||
? `${prMetadata.owner}-${prMetadata.repo}-${prMetadata.number}`
|
||||
: `${prMetadata.projectPath.replace(/\//g, "-")}-${prMetadata.iid}`;
|
||||
const suffix = Math.random().toString(36).slice(2, 8);
|
||||
const prNumber = prMetadata.platform === "github" ? prMetadata.number : prMetadata.iid;
|
||||
sessionDir = join(realpathSync(tmpdir()), `plannotator-pr-${identifier}-${suffix}`);
|
||||
localPath = join(sessionDir, "pool", `pr-${prNumber}`);
|
||||
const fetchRefStr = prMetadata.platform === "github"
|
||||
? `refs/pull/${prMetadata.number}/head`
|
||||
: `refs/merge-requests/${prMetadata.iid}/head`;
|
||||
|
||||
// Validate inputs from platform API to prevent git flag/path injection
|
||||
if (prMetadata.baseBranch.includes('..') || prMetadata.baseBranch.startsWith('-')) throw new Error(`Invalid base branch: ${prMetadata.baseBranch}`);
|
||||
if (!/^[0-9a-f]{40,64}$/i.test(prMetadata.baseSha)) throw new Error(`Invalid base SHA: ${prMetadata.baseSha}`);
|
||||
|
||||
// Detect same-repo vs cross-repo (must match both owner/repo AND host)
|
||||
let isSameRepo = false;
|
||||
try {
|
||||
const remoteResult = await reviewRuntime.runGit(["remote", "get-url", "origin"], { cwd: repoDir });
|
||||
if (remoteResult.exitCode === 0) {
|
||||
const remoteUrl = remoteResult.stdout.trim();
|
||||
const currentRepo = parseRemoteUrl(remoteUrl);
|
||||
const prRepo = prMetadata.platform === "github"
|
||||
? `${prMetadata.owner}/${prMetadata.repo}`
|
||||
: prMetadata.projectPath;
|
||||
const repoMatches = !!currentRepo && currentRepo.toLowerCase() === prRepo.toLowerCase();
|
||||
const sshHost = remoteUrl.match(/^[^@]+@([^:]+):/)?.[1];
|
||||
const httpsHost = (() => { try { return new URL(remoteUrl).hostname; } catch { return null; } })();
|
||||
const remoteHost = (sshHost || httpsHost || "").toLowerCase();
|
||||
const prHost = prMetadata.host.toLowerCase();
|
||||
isSameRepo = repoMatches && remoteHost === prHost;
|
||||
}
|
||||
} catch { /* not in a git repo — cross-repo path */ }
|
||||
|
||||
if (isSameRepo) {
|
||||
// ── Same-repo: fast worktree path ──
|
||||
console.error("Fetching PR branch and creating local worktree...");
|
||||
await fetchRef(reviewRuntime, prMetadata.baseBranch, { cwd: repoDir });
|
||||
await ensureObjectAvailable(reviewRuntime, prMetadata.baseSha, { cwd: repoDir });
|
||||
await fetchRef(reviewRuntime, fetchRefStr, { cwd: repoDir });
|
||||
|
||||
await createWorktree(reviewRuntime, {
|
||||
ref: "FETCH_HEAD",
|
||||
path: localPath,
|
||||
detach: true,
|
||||
cwd: repoDir,
|
||||
});
|
||||
|
||||
const wtRepoDir = repoDir;
|
||||
exitHandler = () => {
|
||||
try {
|
||||
for (const entry of worktreePool?.entries() ?? []) {
|
||||
spawnSync("git", ["worktree", "remove", "--force", entry.path], { cwd: wtRepoDir });
|
||||
}
|
||||
} catch {}
|
||||
if (sessionDir) try { rmSync(sessionDir, { recursive: true, force: true }); } catch {}
|
||||
};
|
||||
worktreeCleanup = async () => {
|
||||
if (exitHandler) { process.removeListener("exit", exitHandler); exitHandler = undefined; }
|
||||
if (worktreePool) await worktreePool.cleanup(reviewRuntime);
|
||||
if (sessionDir) try { rmSync(sessionDir, { recursive: true, force: true }); } catch {}
|
||||
};
|
||||
process.once("exit", exitHandler);
|
||||
} else {
|
||||
// ── Cross-repo: shallow clone + fetch PR head ──
|
||||
const prRepo = prMetadata.platform === "github"
|
||||
? `${prMetadata.owner}/${prMetadata.repo}`
|
||||
: prMetadata.projectPath;
|
||||
if (/^-/.test(prRepo)) throw new Error(`Invalid repository identifier: ${prRepo}`);
|
||||
const cli = prMetadata.platform === "github" ? "gh" : "glab";
|
||||
const host = prMetadata.host;
|
||||
// gh/glab repo clone doesn't accept --hostname; set GH_HOST/GITLAB_HOST env instead
|
||||
const isDefaultHost = host === "github.com" || host === "gitlab.com";
|
||||
const cloneEnv = isDefaultHost ? undefined : {
|
||||
...process.env,
|
||||
...(prMetadata.platform === "github" ? { GH_HOST: host } : { GITLAB_HOST: host }),
|
||||
};
|
||||
|
||||
console.error(`Cloning ${prRepo} (shallow)...`);
|
||||
const cloneResult = spawnSync(cli, ["repo", "clone", prRepo, localPath, "--", "--depth=1", "--no-checkout"], { encoding: "utf-8", env: cloneEnv });
|
||||
if ((cloneResult.status ?? 1) !== 0) {
|
||||
throw new Error(`${cli} repo clone failed: ${(cloneResult.stderr ?? "").trim()}`);
|
||||
}
|
||||
|
||||
console.error("Fetching PR branch...");
|
||||
const fetchResult = await reviewRuntime.runGit(["fetch", "--depth=200", "origin", fetchRefStr], { cwd: localPath });
|
||||
if (fetchResult.exitCode !== 0) throw new Error(`Failed to fetch PR head ref: ${fetchResult.stderr.trim()}`);
|
||||
|
||||
const checkoutResult = await reviewRuntime.runGit(["checkout", "FETCH_HEAD"], { cwd: localPath });
|
||||
if (checkoutResult.exitCode !== 0) {
|
||||
throw new Error(`git checkout FETCH_HEAD failed: ${checkoutResult.stderr.trim()}`);
|
||||
}
|
||||
|
||||
// Best-effort: create base refs so agent diffs work
|
||||
const baseFetch = await reviewRuntime.runGit(["fetch", "--depth=200", "origin", prMetadata.baseSha], { cwd: localPath });
|
||||
if (baseFetch.exitCode !== 0) console.error("Warning: failed to fetch baseSha, agent diffs may be inaccurate");
|
||||
await reviewRuntime.runGit(["branch", "--", prMetadata.baseBranch, prMetadata.baseSha], { cwd: localPath });
|
||||
await reviewRuntime.runGit(["update-ref", `refs/remotes/origin/${prMetadata.baseBranch}`, prMetadata.baseSha], { cwd: localPath });
|
||||
|
||||
exitHandler = () => {
|
||||
if (sessionDir) try { rmSync(sessionDir, { recursive: true, force: true }); } catch {}
|
||||
};
|
||||
worktreeCleanup = () => {
|
||||
if (exitHandler) { process.removeListener("exit", exitHandler); exitHandler = undefined; }
|
||||
if (sessionDir) try { rmSync(sessionDir, { recursive: true, force: true }); } catch {}
|
||||
};
|
||||
process.once("exit", exitHandler);
|
||||
}
|
||||
|
||||
agentCwd = localPath;
|
||||
worktreePool = createWorktreePool(
|
||||
{ sessionDir: sessionDir!, repoDir, isSameRepo },
|
||||
{ path: localPath, prUrl: prMetadata.url, number: prNumber, ready: true },
|
||||
);
|
||||
console.error(`Local checkout ready at ${localPath}`);
|
||||
} catch (err) {
|
||||
console.error("Warning: local worktree creation failed, falling back to remote diff");
|
||||
console.error(err instanceof Error ? err.message : String(err));
|
||||
if (exitHandler) { process.removeListener("exit", exitHandler); exitHandler = undefined; }
|
||||
if (sessionDir) try { rmSync(sessionDir, { recursive: true, force: true }); } catch {}
|
||||
agentCwd = undefined;
|
||||
worktreePool = undefined;
|
||||
worktreeCleanup = undefined;
|
||||
}
|
||||
} else {
|
||||
// --- Local Review Mode ---
|
||||
const cwd = options.cwd ?? ctx.cwd;
|
||||
gitCtx = await getGitContext(cwd);
|
||||
const defaultBranch = options.defaultBranch ?? gitCtx.defaultBranch;
|
||||
const config = loadConfig();
|
||||
diffType = options.diffType ?? resolveDefaultDiffType(config);
|
||||
const result = await runGitDiff(diffType, defaultBranch, cwd, {
|
||||
hideWhitespace: config.diffOptions?.hideWhitespace ?? false,
|
||||
});
|
||||
rawPatch = result.patch;
|
||||
gitRef = result.label;
|
||||
diffError = result.error;
|
||||
// Remember which base the initial diff was computed against so it can
|
||||
// be forwarded to the server below. Only matters when the caller
|
||||
// overrode the detected default; otherwise it matches gitCtx already.
|
||||
initialBase = defaultBranch;
|
||||
}
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch,
|
||||
gitRef,
|
||||
error: diffError,
|
||||
origin: "pi",
|
||||
diffType,
|
||||
gitContext: gitCtx,
|
||||
initialBase,
|
||||
prMetadata,
|
||||
agentCwd,
|
||||
worktreePool,
|
||||
htmlContent: reviewHtmlContent,
|
||||
sharingEnabled: process.env.PLANNOTATOR_SHARE !== "disabled",
|
||||
shareBaseUrl: process.env.PLANNOTATOR_SHARE_URL || undefined,
|
||||
pasteApiUrl: process.env.PLANNOTATOR_PASTE_URL || undefined,
|
||||
onCleanup: worktreeCleanup,
|
||||
});
|
||||
|
||||
return startBrowserDecisionSession(server, ctx, server.waitForDecision);
|
||||
}
|
||||
|
||||
export async function openMarkdownAnnotation(
|
||||
ctx: ExtensionContext,
|
||||
filePath: string,
|
||||
markdown: string,
|
||||
mode: AnnotateMode,
|
||||
folderPath?: string,
|
||||
sourceInfo?: string,
|
||||
sourceConverted?: boolean,
|
||||
gate?: boolean,
|
||||
): Promise<{ feedback: string; exit?: boolean; approved?: boolean }> {
|
||||
const session = await startMarkdownAnnotationSession(
|
||||
ctx,
|
||||
filePath,
|
||||
markdown,
|
||||
mode,
|
||||
folderPath,
|
||||
sourceInfo,
|
||||
sourceConverted,
|
||||
gate,
|
||||
);
|
||||
return session.waitForDecision();
|
||||
}
|
||||
|
||||
export async function startMarkdownAnnotationSession(
|
||||
ctx: ExtensionContext,
|
||||
filePath: string,
|
||||
markdown: string,
|
||||
mode: AnnotateMode,
|
||||
folderPath?: string,
|
||||
sourceInfo?: string,
|
||||
sourceConverted?: boolean,
|
||||
gate?: boolean,
|
||||
): Promise<BrowserDecisionSession<{ feedback: string; exit?: boolean; approved?: boolean }>> {
|
||||
if (!ctx.hasUI || !planHtmlContent) {
|
||||
throw new Error("Plannotator annotation browser is unavailable in this session.");
|
||||
}
|
||||
|
||||
let resolvedMarkdown = markdown;
|
||||
if (!resolvedMarkdown.trim() && existsSync(filePath)) {
|
||||
try {
|
||||
const fileStat = statSync(filePath);
|
||||
if (!fileStat.isDirectory()) {
|
||||
resolvedMarkdown = readFileSync(filePath, "utf-8");
|
||||
}
|
||||
} catch {
|
||||
// fall back to provided markdown
|
||||
}
|
||||
}
|
||||
|
||||
const server = await startAnnotateServer({
|
||||
markdown: resolvedMarkdown,
|
||||
filePath,
|
||||
origin: "pi",
|
||||
mode,
|
||||
folderPath,
|
||||
sourceInfo,
|
||||
sourceConverted,
|
||||
gate,
|
||||
htmlContent: planHtmlContent,
|
||||
sharingEnabled: process.env.PLANNOTATOR_SHARE !== "disabled",
|
||||
shareBaseUrl: process.env.PLANNOTATOR_SHARE_URL || undefined,
|
||||
pasteApiUrl: process.env.PLANNOTATOR_PASTE_URL || undefined,
|
||||
});
|
||||
|
||||
return startBrowserDecisionSession(server, ctx, server.waitForDecision);
|
||||
}
|
||||
|
||||
export async function openLastMessageAnnotation(
|
||||
ctx: ExtensionContext,
|
||||
lastText: string,
|
||||
gate?: boolean,
|
||||
): Promise<{ feedback: string; exit?: boolean; approved?: boolean }> {
|
||||
return openMarkdownAnnotation(ctx, "last-message", lastText, "annotate-last", undefined, undefined, undefined, gate);
|
||||
}
|
||||
|
||||
export async function startLastMessageAnnotationSession(
|
||||
ctx: ExtensionContext,
|
||||
lastText: string,
|
||||
gate?: boolean,
|
||||
): Promise<BrowserDecisionSession<{ feedback: string; exit?: boolean; approved?: boolean }>> {
|
||||
return startMarkdownAnnotationSession(
|
||||
ctx,
|
||||
"last-message",
|
||||
lastText,
|
||||
"annotate-last",
|
||||
undefined,
|
||||
undefined,
|
||||
undefined,
|
||||
gate,
|
||||
);
|
||||
}
|
||||
|
||||
export async function openArchiveBrowserAction(
|
||||
ctx: ExtensionContext,
|
||||
customPlanPath?: string,
|
||||
): Promise<{ opened: boolean }> {
|
||||
if (!ctx.hasUI || !planHtmlContent) {
|
||||
throw new Error("Plannotator archive browser is unavailable in this session.");
|
||||
}
|
||||
|
||||
const server = await startPlanReviewServer({
|
||||
plan: "",
|
||||
htmlContent: planHtmlContent,
|
||||
origin: "pi",
|
||||
mode: "archive",
|
||||
customPlanPath,
|
||||
sharingEnabled: process.env.PLANNOTATOR_SHARE !== "disabled",
|
||||
shareBaseUrl: process.env.PLANNOTATOR_SHARE_URL || undefined,
|
||||
pasteApiUrl: process.env.PLANNOTATOR_PASTE_URL || undefined,
|
||||
});
|
||||
|
||||
return openBrowserAndWait(server, ctx, async () => {
|
||||
if (server.waitForDone) {
|
||||
await server.waitForDone();
|
||||
}
|
||||
return { opened: true };
|
||||
});
|
||||
}
|
||||
334
extensions/plannotator/plannotator-events.ts
Normal file
@@ -0,0 +1,334 @@
|
||||
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
|
||||
import { homedir } from "node:os";
|
||||
import { dirname, join } from "node:path";
|
||||
import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent";
|
||||
import type { DiffType } from "./server.js";
|
||||
import {
|
||||
getLastAssistantMessageText,
|
||||
getStartupErrorMessage,
|
||||
openArchiveBrowserAction,
|
||||
openCodeReview,
|
||||
openLastMessageAnnotation,
|
||||
openMarkdownAnnotation,
|
||||
startCodeReviewBrowserSession,
|
||||
startLastMessageAnnotationSession,
|
||||
startMarkdownAnnotationSession,
|
||||
startPlanReviewBrowserSession,
|
||||
} from "./plannotator-browser.js";
|
||||
|
||||
export const PLANNOTATOR_REQUEST_CHANNEL = "plannotator:request" as const;
|
||||
export const PLANNOTATOR_REVIEW_RESULT_CHANNEL = "plannotator:review-result" as const;
|
||||
export const PLANNOTATOR_TIMEOUT_MS = 5_000;
|
||||
|
||||
export type PlannotatorAction =
|
||||
| "plan-review"
|
||||
| "review-status"
|
||||
| "code-review"
|
||||
| "annotate"
|
||||
| "annotate-last"
|
||||
| "archive";
|
||||
|
||||
export interface PlannotatorHandledResponse<T> {
|
||||
status: "handled";
|
||||
result: T;
|
||||
}
|
||||
|
||||
export interface PlannotatorUnavailableResponse {
|
||||
status: "unavailable";
|
||||
error?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorErrorResponse {
|
||||
status: "error";
|
||||
error: string;
|
||||
}
|
||||
|
||||
export type PlannotatorResponse<T> =
|
||||
| PlannotatorHandledResponse<T>
|
||||
| PlannotatorUnavailableResponse
|
||||
| PlannotatorErrorResponse;
|
||||
|
||||
export interface PlannotatorRequestBase<A extends PlannotatorAction, P, R> {
|
||||
requestId: string;
|
||||
action: A;
|
||||
payload: P;
|
||||
respond: (response: PlannotatorResponse<R>) => void;
|
||||
}
|
||||
|
||||
export interface PlannotatorPlanReviewPayload {
|
||||
planFilePath?: string;
|
||||
planContent: string;
|
||||
origin?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorPlanReviewStartResult {
|
||||
status: "pending";
|
||||
reviewId: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorReviewResultEvent {
|
||||
reviewId: string;
|
||||
approved: boolean;
|
||||
feedback?: string;
|
||||
savedPath?: string;
|
||||
agentSwitch?: string;
|
||||
permissionMode?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorReviewStatusPayload {
|
||||
reviewId: string;
|
||||
}
|
||||
|
||||
export type PlannotatorReviewStatusResult =
|
||||
| { status: "pending" }
|
||||
| ({ status: "completed" } & PlannotatorReviewResultEvent)
|
||||
| { status: "missing" };
|
||||
|
||||
export interface PlannotatorCodeReviewPayload {
|
||||
diffType?: DiffType;
|
||||
defaultBranch?: string;
|
||||
cwd?: string;
|
||||
prUrl?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorCodeReviewResult {
|
||||
approved: boolean;
|
||||
feedback?: string;
|
||||
annotations?: unknown[];
|
||||
agentSwitch?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorAnnotatePayload {
|
||||
filePath: string;
|
||||
markdown?: string;
|
||||
mode?: "annotate" | "annotate-folder" | "annotate-last";
|
||||
folderPath?: string;
|
||||
/** Enable review-gate UX (Approve / Annotate / Close), #570 */
|
||||
gate?: boolean;
|
||||
}
|
||||
|
||||
export interface PlannotatorAnnotationResult {
|
||||
feedback: string;
|
||||
/** True when the reviewer closed the session without providing feedback. */
|
||||
exit?: boolean;
|
||||
/** True when the reviewer clicked Approve in review-gate mode, #570 */
|
||||
approved?: boolean;
|
||||
}
|
||||
|
||||
export interface PlannotatorArchivePayload {
|
||||
customPlanPath?: string;
|
||||
}
|
||||
|
||||
export interface PlannotatorArchiveResult {
|
||||
opened: boolean;
|
||||
}
|
||||
|
||||
export type PlannotatorRequestMap = {
|
||||
"plan-review": PlannotatorRequestBase<"plan-review", PlannotatorPlanReviewPayload, PlannotatorPlanReviewStartResult>;
|
||||
"review-status": PlannotatorRequestBase<"review-status", PlannotatorReviewStatusPayload, PlannotatorReviewStatusResult>;
|
||||
"code-review": PlannotatorRequestBase<"code-review", PlannotatorCodeReviewPayload, PlannotatorCodeReviewResult>;
|
||||
annotate: PlannotatorRequestBase<"annotate", PlannotatorAnnotatePayload, PlannotatorAnnotationResult>;
|
||||
"annotate-last": PlannotatorRequestBase<"annotate-last", PlannotatorAnnotatePayload, PlannotatorAnnotationResult>;
|
||||
archive: PlannotatorRequestBase<"archive", PlannotatorArchivePayload, PlannotatorArchiveResult>;
|
||||
};
|
||||
export type PlannotatorRequest = PlannotatorRequestMap[PlannotatorAction];
|
||||
export type PlannotatorResponseMap = {
|
||||
"plan-review": PlannotatorResponse<PlannotatorPlanReviewStartResult>;
|
||||
"review-status": PlannotatorResponse<PlannotatorReviewStatusResult>;
|
||||
"code-review": PlannotatorResponse<PlannotatorCodeReviewResult>;
|
||||
annotate: PlannotatorResponse<PlannotatorAnnotationResult>;
|
||||
"annotate-last": PlannotatorResponse<PlannotatorAnnotationResult>;
|
||||
archive: PlannotatorResponse<PlannotatorArchiveResult>;
|
||||
};
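// Illustrative sketch (not part of this module's API surface): another extension
// can drive Plannotator over Pi's event bus by emitting a request on
// PLANNOTATOR_REQUEST_CHANNEL with a `respond` callback, matching the shapes above.
//
//   pi.events.emit(PLANNOTATOR_REQUEST_CHANNEL, {
//     requestId: crypto.randomUUID(),
//     action: "annotate",
//     payload: { filePath: "NOTES.md", gate: true },
//     respond: (response) => {
//       if (response.status === "handled") console.log(response.result.feedback);
//     },
//   });
//
// Here `pi` is the caller's ExtensionAPI instance and "NOTES.md" is a
// hypothetical file path used only for illustration.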
|
||||
function isPlannotatorAction(value: unknown): value is PlannotatorAction {
|
||||
return (
|
||||
value === "plan-review" ||
|
||||
value === "review-status" ||
|
||||
value === "code-review" ||
|
||||
value === "annotate" ||
|
||||
value === "annotate-last" ||
|
||||
value === "archive"
|
||||
);
|
||||
}
|
||||
|
||||
const REVIEW_STATUS_PATH = join(homedir(), ".pi", "plannotator-review-status.json");
|
||||
|
||||
type StoredReviewStatus = Record<string, PlannotatorReviewStatusResult>;
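// Illustrative on-disk shape (keys are reviewIds; values follow
// PlannotatorReviewStatusResult):
// {
//   "review-a": { "status": "pending" },
//   "review-b": { "status": "completed", "reviewId": "review-b", "approved": true, "feedback": "LGTM" }
// }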
|
||||
|
||||
function readStoredReviewStatuses(): StoredReviewStatus {
|
||||
try {
|
||||
if (!existsSync(REVIEW_STATUS_PATH)) return {};
|
||||
const raw = readFileSync(REVIEW_STATUS_PATH, "utf-8");
|
||||
const parsed = JSON.parse(raw) as StoredReviewStatus;
|
||||
return parsed && typeof parsed === "object" ? parsed : {};
|
||||
} catch {
|
||||
return {};
|
||||
}
|
||||
}
|
||||
|
||||
function writeStoredReviewStatuses(statuses: StoredReviewStatus): void {
|
||||
mkdirSync(dirname(REVIEW_STATUS_PATH), { recursive: true });
|
||||
writeFileSync(REVIEW_STATUS_PATH, JSON.stringify(statuses, null, 2));
|
||||
}
|
||||
|
||||
function setStoredReviewStatus(reviewId: string, status: PlannotatorReviewStatusResult): void {
|
||||
const statuses = readStoredReviewStatuses();
|
||||
statuses[reviewId] = status;
|
||||
writeStoredReviewStatuses(statuses);
|
||||
}
|
||||
|
||||
function getStoredReviewStatus(reviewId: string): PlannotatorReviewStatusResult {
|
||||
return readStoredReviewStatuses()[reviewId] ?? { status: "missing" };
|
||||
}
|
||||
|
||||
function createActiveSessionContext() {
|
||||
let currentCtx: ExtensionContext | undefined;
|
||||
|
||||
return {
|
||||
set(ctx: ExtensionContext): void {
|
||||
currentCtx = ctx;
|
||||
},
|
||||
clear(): void {
|
||||
currentCtx = undefined;
|
||||
},
|
||||
get(): ExtensionContext | undefined {
|
||||
return currentCtx;
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
export function registerPlannotatorEventListeners(pi: ExtensionAPI): void {
|
||||
const activeSessionContext = createActiveSessionContext();
|
||||
|
||||
// Plannotator event requests are handled against the latest active session.
|
||||
// The active context is intentionally session-scoped and replaced on each session_start.
|
||||
pi.on("session_start", async (_event, ctx) => {
|
||||
activeSessionContext.set(ctx);
|
||||
});
|
||||
pi.events.on(PLANNOTATOR_REQUEST_CHANNEL, async (data) => {
|
||||
const request = data as Partial<PlannotatorRequest> | null;
|
||||
const ctx = activeSessionContext.get();
|
||||
|
||||
if (!request || typeof request.respond !== "function" || !isPlannotatorAction(request.action)) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
if (request.action === "review-status") {
|
||||
const reviewId = request.payload?.reviewId;
|
||||
if (typeof reviewId !== "string" || !reviewId.trim()) {
|
||||
request.respond({ status: "error", error: "Missing reviewId for review-status request." });
|
||||
return;
|
||||
}
|
||||
request.respond({ status: "handled", result: getStoredReviewStatus(reviewId) });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!ctx) {
|
||||
request.respond({ status: "unavailable", error: "Plannotator context is not ready yet." });
|
||||
return;
|
||||
}
|
||||
|
||||
switch (request.action) {
|
||||
case "plan-review": {
|
||||
const planContent = request.payload?.planContent;
|
||||
if (typeof planContent !== "string" || !planContent.trim()) {
|
||||
request.respond({ status: "error", error: "Missing planContent for plan-review request." });
|
||||
return;
|
||||
}
|
||||
const session = await startPlanReviewBrowserSession(ctx, planContent);
|
||||
setStoredReviewStatus(session.reviewId, { status: "pending" });
|
||||
session.onDecision((result) => {
|
||||
const reviewResult = {
|
||||
reviewId: session.reviewId,
|
||||
approved: result.approved,
|
||||
feedback: result.feedback,
|
||||
savedPath: result.savedPath,
|
||||
agentSwitch: result.agentSwitch,
|
||||
permissionMode: result.permissionMode,
|
||||
} satisfies PlannotatorReviewResultEvent;
|
||||
setStoredReviewStatus(session.reviewId, { status: "completed", ...reviewResult });
|
||||
pi.events.emit(PLANNOTATOR_REVIEW_RESULT_CHANNEL, reviewResult);
|
||||
});
|
||||
request.respond({
|
||||
status: "handled",
|
||||
result: {
|
||||
status: "pending",
|
||||
reviewId: session.reviewId,
|
||||
},
|
||||
});
|
||||
return;
|
||||
}
|
||||
case "code-review": {
|
||||
const result = await openCodeReview(ctx, {
|
||||
cwd: request.payload?.cwd,
|
||||
defaultBranch: request.payload?.defaultBranch,
|
||||
diffType: request.payload?.diffType,
|
||||
prUrl: request.payload?.prUrl,
|
||||
});
|
||||
request.respond({ status: "handled", result });
|
||||
return;
|
||||
}
|
||||
case "annotate": {
|
||||
const payload = request.payload;
|
||||
if (!payload?.filePath) {
|
||||
request.respond({ status: "error", error: "Missing filePath for annotate request." });
|
||||
return;
|
||||
}
|
||||
const sourceConverted = /\.html?$/i.test(payload.filePath) || /^https?:\/\//i.test(payload.filePath);
|
||||
const result = await openMarkdownAnnotation(
|
||||
ctx,
|
||||
payload.filePath,
|
||||
payload.markdown ?? "",
|
||||
payload.mode ?? "annotate",
|
||||
payload.folderPath,
|
||||
undefined,
|
||||
sourceConverted,
|
||||
payload.gate,
|
||||
);
|
||||
request.respond({ status: "handled", result });
|
||||
return;
|
||||
}
|
||||
case "annotate-last": {
|
||||
const payload = request.payload;
|
||||
const lastText = payload?.markdown?.trim() ? payload.markdown : getLastAssistantMessageText(ctx);
|
||||
if (!lastText) {
|
||||
request.respond({ status: "unavailable", error: "No assistant message found in session." });
|
||||
return;
|
||||
}
|
||||
const result = await openLastMessageAnnotation(ctx, lastText, payload?.gate);
|
||||
request.respond({ status: "handled", result });
|
||||
return;
|
||||
}
|
||||
case "archive": {
|
||||
const result = await openArchiveBrowserAction(ctx, request.payload?.customPlanPath);
|
||||
request.respond({ status: "handled", result });
|
||||
return;
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
const message = getStartupErrorMessage(err);
|
||||
if (/unavailable|not available/i.test(message)) {
|
||||
request.respond({ status: "unavailable", error: message });
|
||||
return;
|
||||
}
|
||||
request.respond({ status: "error", error: message });
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
export {
|
||||
getLastAssistantMessageText,
|
||||
hasPlanBrowserHtml,
|
||||
hasReviewBrowserHtml,
|
||||
startCodeReviewBrowserSession,
|
||||
startLastMessageAnnotationSession,
|
||||
startMarkdownAnnotationSession,
|
||||
getStartupErrorMessage,
|
||||
openArchiveBrowserAction,
|
||||
openCodeReview,
|
||||
openLastMessageAnnotation,
|
||||
openMarkdownAnnotation,
|
||||
openPlanReviewBrowser,
|
||||
startPlanReviewBrowserSession,
|
||||
} from "./plannotator-browser.js";
|
||||
3983
extensions/plannotator/plannotator.html
Normal file
File diff suppressed because one or more lines are too long
12
extensions/plannotator/plannotator.json
Normal file
@@ -0,0 +1,12 @@
|
||||
{
|
||||
"phases": {
|
||||
"planning": {
|
||||
"activeTools": ["grep", "find", "ls", "plannotator_submit_plan"],
|
||||
"statusLabel": "⏸ plan",
|
||||
"systemPrompt": "[PLANNOTATOR - PLANNING PHASE]\nYou are in plan mode. You MUST NOT make any changes to the codebase — no edits, no commits, no installs, no destructive commands. During planning you may only write or edit markdown files (.md, .mdx) inside the working directory.\n\nAvailable tools: read, bash, grep, find, ls, write (markdown only), edit (markdown only), plannotator_submit_plan\n\nDo not run destructive bash commands (rm, git push, npm install, etc.) — focus on reading and exploring the codebase. Web fetching (curl, wget) is fine.\n\n## Iterative Planning Workflow\n\nYou are pair-planning with the user. Explore the code to build context, then write your findings into a markdown plan file as you go. The plan starts as a rough skeleton and gradually becomes the final plan.\n\n### Picking a plan file\n\nChoose a descriptive filename for your plan. Convention: `PLAN.md` at the repo root for a single focused plan, or `plans/<short-name>.md` for projects that keep multiple plans. Reuse the same filename across revisions of the same plan so version history links up.\n\n### The Loop\n\nRepeat this cycle until the plan is complete:\n\n1. **Explore** — Use read, grep, find, ls, and bash to understand the codebase. Actively search for existing functions, utilities, and patterns that can be reused — avoid proposing new code when suitable implementations already exist.\n2. **Update the plan file** — After each discovery, immediately capture what you learned in the plan. Don't wait until the end. Use write for the initial draft, then edit for all subsequent updates.\n3. **Ask the user** — When you hit an ambiguity or decision you can't resolve from code alone, ask. Then go back to step 1.\n\n### First Turn\n\nStart by quickly scanning key files to form an initial understanding of the task scope. Then write a skeleton plan (headers and rough notes) and ask the user your first round of questions. Don't explore exhaustively before engaging the user.\n\n### Asking Good Questions\n\n- Never ask what you could find out by reading the code.\n- Batch related questions together.\n- Focus on things only the user can answer: requirements, preferences, tradeoffs, edge-case priorities.\n- Scale depth to the task — a vague feature request needs many rounds; a focused bug fix may need one or none.\n\n### Plan File Structure\n\nYour plan file should use markdown with clear sections:\n- **Context** — Why this change is being made: the problem, what prompted it, the intended outcome.\n- **Approach** — Your recommended approach only, not all alternatives considered.\n- **Files to modify** — List the critical file paths that will be changed.\n- **Reuse** — Reference existing functions and utilities you found, with their file paths.\n- **Steps** — Implementation checklist:\n - [ ] Step 1 description\n - [ ] Step 2 description\n- **Verification** — How to test the changes end-to-end (run the code, run tests, manual checks).\n\nKeep the plan concise enough to scan quickly, but detailed enough to execute effectively.\n\n### When to Submit\n\nYour plan is ready when you've addressed all ambiguities and it covers: what to change, which files to modify, what existing code to reuse, and how to verify. Call plannotator_submit_plan with the path to your plan file to submit for review.\n\n### Revising After Feedback\n\nWhen the user denies a plan with feedback:\n1. Read the plan file to see the current plan.\n2. Use the edit tool to make targeted changes addressing the feedback — do NOT rewrite the entire file.\n3. Call plannotator_submit_plan again with the same filePath to resubmit.\n\n### Ending Your Turn\n\nYour turn should only end by either:\n- Asking the user a question to gather more information.\n- Calling plannotator_submit_plan when the plan is ready for review.\n\nDo not end your turn without doing one of these two things."
|
||||
},
|
||||
"executing": {
|
||||
"systemPrompt": "[PLANNOTATOR - EXECUTING PLAN]\nFull tool access is enabled. Execute the plan from ${planFilePath}.\n\nRemaining steps:\n${todoList}\n\nExecute each step in order. After completing a step, include [DONE:n] in your response where n is the step number."
|
||||
}
|
||||
}
|
||||
}
|
||||
4961
extensions/plannotator/review-editor.html
Normal file
File diff suppressed because one or more lines are too long
443
extensions/plannotator/server.test.ts
Normal file
@@ -0,0 +1,443 @@
|
||||
import { afterEach, describe, expect, test } from "bun:test";
|
||||
import { spawnSync } from "node:child_process";
|
||||
import { mkdtempSync, rmSync, writeFileSync } from "node:fs";
|
||||
import { createServer as createNetServer } from "node:net";
|
||||
import { tmpdir } from "node:os";
|
||||
import { join } from "node:path";
|
||||
import { getGitContext, runGitDiff, startReviewServer } from "./server";
|
||||
|
||||
const tempDirs: string[] = [];
|
||||
const originalCwd = process.cwd();
|
||||
const originalHome = process.env.HOME;
|
||||
const originalPort = process.env.PLANNOTATOR_PORT;
|
||||
|
||||
function makeTempDir(prefix: string): string {
|
||||
const dir = mkdtempSync(join(tmpdir(), prefix));
|
||||
tempDirs.push(dir);
|
||||
return dir;
|
||||
}
|
||||
|
||||
function git(cwd: string, args: string[]): string {
|
||||
const result = spawnSync("git", args, { cwd, encoding: "utf-8" });
|
||||
if (result.status !== 0) {
|
||||
throw new Error(result.stderr || `git ${args.join(" ")} failed`);
|
||||
}
|
||||
return result.stdout.trim();
|
||||
}
|
||||
|
||||
function initRepo(): string {
|
||||
const repoDir = makeTempDir("plannotator-pi-review-");
|
||||
git(repoDir, ["init"]);
|
||||
git(repoDir, ["branch", "-M", "main"]);
|
||||
git(repoDir, ["config", "user.email", "pi-review@example.com"]);
|
||||
git(repoDir, ["config", "user.name", "Pi Review"]);
|
||||
|
||||
writeFileSync(join(repoDir, "tracked.txt"), "before\n", "utf-8");
|
||||
git(repoDir, ["add", "tracked.txt"]);
|
||||
git(repoDir, ["commit", "-m", "initial"]);
|
||||
|
||||
return repoDir;
|
||||
}
|
||||
|
||||
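// Bind to port 0 so the OS assigns a free ephemeral port, then release it and
// hand the number to the server under test via PLANNOTATOR_PORT.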
function reservePort(): Promise<number> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const server = createNetServer();
|
||||
server.once("error", reject);
|
||||
server.listen(0, "127.0.0.1", () => {
|
||||
const address = server.address();
|
||||
if (!address || typeof address === "string") {
|
||||
server.close();
|
||||
reject(new Error("Failed to reserve test port"));
|
||||
return;
|
||||
}
|
||||
|
||||
const { port } = address;
|
||||
server.close((error) => {
|
||||
if (error) {
|
||||
reject(error);
|
||||
return;
|
||||
}
|
||||
resolve(port);
|
||||
});
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
afterEach(() => {
|
||||
process.chdir(originalCwd);
|
||||
if (originalHome === undefined) {
|
||||
delete process.env.HOME;
|
||||
} else {
|
||||
process.env.HOME = originalHome;
|
||||
}
|
||||
if (originalPort === undefined) {
|
||||
delete process.env.PLANNOTATOR_PORT;
|
||||
} else {
|
||||
process.env.PLANNOTATOR_PORT = originalPort;
|
||||
}
|
||||
|
||||
for (const dir of tempDirs.splice(0)) {
|
||||
rmSync(dir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
describe("pi review server", () => {
|
||||
test("serves review diff parity endpoints including drafts, uploads, and editor annotations", async () => {
|
||||
const homeDir = makeTempDir("plannotator-pi-home-");
|
||||
const repoDir = initRepo();
|
||||
process.env.HOME = homeDir;
|
||||
process.chdir(repoDir);
|
||||
process.env.PLANNOTATOR_PORT = String(await reservePort());
|
||||
|
||||
writeFileSync(join(repoDir, "tracked.txt"), "after\n", "utf-8");
|
||||
writeFileSync(join(repoDir, "untracked.txt"), "brand new\n", "utf-8");
|
||||
|
||||
const gitContext = await getGitContext();
|
||||
const diff = await runGitDiff("uncommitted", gitContext.defaultBranch);
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch: diff.patch,
|
||||
gitRef: diff.label,
|
||||
error: diff.error,
|
||||
diffType: "uncommitted",
|
||||
gitContext,
|
||||
origin: "pi",
|
||||
htmlContent: "<!doctype html><html><body>review</body></html>",
|
||||
});
|
||||
|
||||
try {
|
||||
const diffResponse = await fetch(`${server.url}/api/diff`);
|
||||
expect(diffResponse.status).toBe(200);
|
||||
const diffPayload = await diffResponse.json() as {
|
||||
rawPatch: string;
|
||||
gitContext?: { diffOptions: Array<{ id: string }> };
|
||||
origin?: string;
|
||||
repoInfo?: { display: string };
|
||||
};
|
||||
expect(diffPayload.origin).toBe("pi");
|
||||
expect(diffPayload.rawPatch).toContain("diff --git a/untracked.txt b/untracked.txt");
|
||||
expect(diffPayload.gitContext?.diffOptions.map((option) => option.id)).toEqual(
|
||||
expect.arrayContaining(["uncommitted", "staged", "unstaged", "last-commit"]),
|
||||
);
|
||||
expect(diffPayload.repoInfo?.display).toBeTruthy();
|
||||
|
||||
const fileContentResponse = await fetch(`${server.url}/api/file-content?path=tracked.txt`);
|
||||
const fileContent = await fileContentResponse.json() as {
|
||||
oldContent: string | null;
|
||||
newContent: string | null;
|
||||
};
|
||||
expect(fileContent.oldContent).toBe("before\n");
|
||||
expect(fileContent.newContent).toBe("after\n");
|
||||
|
||||
const draftBody = { annotations: [{ id: "draft-1" }] };
|
||||
const draftSave = await fetch(`${server.url}/api/draft`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify(draftBody),
|
||||
});
|
||||
expect(draftSave.status).toBe(200);
|
||||
|
||||
const draftLoad = await fetch(`${server.url}/api/draft`);
|
||||
expect(draftLoad.status).toBe(200);
|
||||
expect(await draftLoad.json()).toEqual(draftBody);
|
||||
|
||||
const annotationCreate = await fetch(`${server.url}/api/editor-annotation`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({
|
||||
filePath: "tracked.txt",
|
||||
selectedText: "after",
|
||||
lineStart: 1,
|
||||
lineEnd: 1,
|
||||
comment: "Check wording",
|
||||
}),
|
||||
});
|
||||
expect(annotationCreate.status).toBe(200);
|
||||
const createdAnnotation = await annotationCreate.json() as { id: string };
|
||||
expect(createdAnnotation.id).toBeTruthy();
|
||||
|
||||
const annotationsList = await fetch(`${server.url}/api/editor-annotations`);
|
||||
const annotationsPayload = await annotationsList.json() as { annotations: Array<{ id: string }> };
|
||||
expect(annotationsPayload.annotations).toHaveLength(1);
|
||||
expect(annotationsPayload.annotations[0].id).toBe(createdAnnotation.id);
|
||||
|
||||
const annotationDelete = await fetch(
|
||||
`${server.url}/api/editor-annotation?id=${encodeURIComponent(createdAnnotation.id)}`,
|
||||
{ method: "DELETE" },
|
||||
);
|
||||
expect(annotationDelete.status).toBe(200);
|
||||
|
||||
const agentsResponse = await fetch(`${server.url}/api/agents`);
|
||||
expect(await agentsResponse.json()).toEqual({ agents: [] });
|
||||
|
||||
const formData = new FormData();
|
||||
formData.append("file", new File(["png-bytes"], "diagram.png", { type: "image/png" }));
|
||||
const uploadResponse = await fetch(`${server.url}/api/upload`, {
|
||||
method: "POST",
|
||||
body: formData,
|
||||
});
|
||||
expect(uploadResponse.status).toBe(200);
|
||||
const uploadPayload = await uploadResponse.json() as { path: string; originalName: string };
|
||||
expect(uploadPayload.originalName).toBe("diagram.png");
|
||||
|
||||
const imageResponse = await fetch(
|
||||
`${server.url}/api/image?path=${encodeURIComponent(uploadPayload.path)}`,
|
||||
);
|
||||
expect(imageResponse.status).toBe(200);
|
||||
expect(await imageResponse.text()).toBe("png-bytes");
|
||||
|
||||
const draftDelete = await fetch(`${server.url}/api/draft`, { method: "DELETE" });
|
||||
expect(draftDelete.status).toBe(200);
|
||||
|
||||
const draftMissing = await fetch(`${server.url}/api/draft`);
|
||||
expect(draftMissing.status).toBe(404);
|
||||
|
||||
const feedbackResponse = await fetch(`${server.url}/api/feedback`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({
|
||||
approved: false,
|
||||
feedback: "Please update the diff",
|
||||
annotations: [{ id: "note-1" }],
|
||||
}),
|
||||
});
|
||||
expect(feedbackResponse.status).toBe(200);
|
||||
|
||||
await expect(server.waitForDecision()).resolves.toEqual({
|
||||
approved: false,
|
||||
feedback: "Please update the diff",
|
||||
annotations: [{ id: "note-1" }],
|
||||
agentSwitch: undefined,
|
||||
});
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
});
|
||||
|
||||
test("exit endpoint resolves decision with exit flag", async () => {
|
||||
const homeDir = makeTempDir("plannotator-pi-home-");
|
||||
const repoDir = initRepo();
|
||||
process.env.HOME = homeDir;
|
||||
process.chdir(repoDir);
|
||||
process.env.PLANNOTATOR_PORT = String(await reservePort());
|
||||
|
||||
const gitContext = await getGitContext();
|
||||
const diff = await runGitDiff("uncommitted", gitContext.defaultBranch);
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch: diff.patch,
|
||||
gitRef: diff.label,
|
||||
error: diff.error,
|
||||
diffType: "uncommitted",
|
||||
gitContext,
|
||||
origin: "pi",
|
||||
htmlContent: "<!doctype html><html><body>review</body></html>",
|
||||
});
|
||||
|
||||
try {
|
||||
const exitResponse = await fetch(`${server.url}/api/exit`, { method: "POST" });
|
||||
expect(exitResponse.status).toBe(200);
|
||||
expect(await exitResponse.json()).toEqual({ ok: true });
|
||||
|
||||
await expect(server.waitForDecision()).resolves.toEqual({
|
||||
exit: true,
|
||||
approved: false,
|
||||
feedback: "",
|
||||
annotations: [],
|
||||
agentSwitch: undefined,
|
||||
});
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
});
|
||||
|
||||
test("git-add endpoint stages and unstages files in review mode", async () => {
|
||||
const homeDir = makeTempDir("plannotator-pi-home-");
|
||||
const repoDir = initRepo();
|
||||
process.env.HOME = homeDir;
|
||||
process.chdir(repoDir);
|
||||
process.env.PLANNOTATOR_PORT = String(await reservePort());
|
||||
|
||||
writeFileSync(join(repoDir, "stage-me.txt"), "new file\n", "utf-8");
|
||||
|
||||
const gitContext = await getGitContext();
|
||||
const diff = await runGitDiff("uncommitted", gitContext.defaultBranch);
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch: diff.patch,
|
||||
gitRef: diff.label,
|
||||
error: diff.error,
|
||||
diffType: "uncommitted",
|
||||
gitContext,
|
||||
origin: "pi",
|
||||
htmlContent: "<!doctype html><html><body>review</body></html>",
|
||||
});
|
||||
|
||||
try {
|
||||
const stageResponse = await fetch(`${server.url}/api/git-add`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ filePath: "stage-me.txt" }),
|
||||
});
|
||||
expect(stageResponse.status).toBe(200);
|
||||
expect(git(repoDir, ["diff", "--staged", "--name-only"])).toContain("stage-me.txt");
|
||||
|
||||
const unstageResponse = await fetch(`${server.url}/api/git-add`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ filePath: "stage-me.txt", undo: true }),
|
||||
});
|
||||
expect(unstageResponse.status).toBe(200);
|
||||
expect(git(repoDir, ["diff", "--staged", "--name-only"])).not.toContain("stage-me.txt");
|
||||
expect(git(repoDir, ["status", "--short"])).toContain("?? stage-me.txt");
|
||||
|
||||
await fetch(`${server.url}/api/feedback`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({
|
||||
approved: true,
|
||||
feedback: "LGTM - no changes requested.",
|
||||
annotations: [],
|
||||
}),
|
||||
});
|
||||
await server.waitForDecision();
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
}, 15_000);
|
||||
|
||||
test("round-trips the active base branch through /api/diff and /api/diff/switch", async () => {
|
||||
const homeDir = makeTempDir("plannotator-pi-home-");
|
||||
const repoDir = initRepo();
|
||||
process.env.HOME = homeDir;
|
||||
process.chdir(repoDir);
|
||||
process.env.PLANNOTATOR_PORT = String(await reservePort());
|
||||
|
||||
// Create a second branch the picker can switch to, then branch off it so
|
||||
// currentBranch !== defaultBranch and the branch/merge-base options appear.
|
||||
git(repoDir, ["checkout", "-b", "develop"]);
|
||||
writeFileSync(join(repoDir, "develop-file.txt"), "develop\n", "utf-8");
|
||||
git(repoDir, ["add", "develop-file.txt"]);
|
||||
git(repoDir, ["commit", "-m", "develop commit"]);
|
||||
git(repoDir, ["checkout", "-b", "feature/x"]);
|
||||
writeFileSync(join(repoDir, "feature-file.txt"), "feature\n", "utf-8");
|
||||
git(repoDir, ["add", "feature-file.txt"]);
|
||||
git(repoDir, ["commit", "-m", "feature commit"]);
|
||||
|
||||
const gitContext = await getGitContext();
|
||||
const diff = await runGitDiff("uncommitted", gitContext.defaultBranch);
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch: diff.patch,
|
||||
gitRef: diff.label,
|
||||
error: diff.error,
|
||||
diffType: "uncommitted",
|
||||
gitContext,
|
||||
origin: "pi",
|
||||
htmlContent: "<!doctype html><html><body>review</body></html>",
|
||||
});
|
||||
|
||||
try {
|
||||
// Initial load: server echoes the detected default as the active base.
|
||||
const initial = await fetch(`${server.url}/api/diff`).then((r) => r.json()) as {
|
||||
base?: string;
|
||||
gitContext?: { defaultBranch: string };
|
||||
};
|
||||
expect(initial.base).toBe(gitContext.defaultBranch);
|
||||
expect(initial.base).toBe(initial.gitContext?.defaultBranch);
|
||||
|
||||
// Switch to a custom base — response must echo the resolved base.
|
||||
const switchResponse = await fetch(`${server.url}/api/diff/switch`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ diffType: "branch", base: "develop" }),
|
||||
});
|
||||
expect(switchResponse.status).toBe(200);
|
||||
const switched = await switchResponse.json() as { base?: string; diffType: string };
|
||||
expect(switched.base).toBe("develop");
|
||||
expect(switched.diffType).toBe("branch");
|
||||
|
||||
// Subsequent /api/diff load reflects the switched base — this is what
|
||||
// survives a page refresh / reconnect.
|
||||
const rehydrate = await fetch(`${server.url}/api/diff`).then((r) => r.json()) as {
|
||||
base?: string;
|
||||
};
|
||||
expect(rehydrate.base).toBe("develop");
|
||||
|
||||
// Unknown refs pass through verbatim — the resolver trusts callers so
|
||||
// unusual-but-valid refs (tags, SHAs, non-origin remotes) work. Truly
|
||||
// invalid refs surface via the diff error, not via a silent swap.
|
||||
const unknownResponse = await fetch(`${server.url}/api/diff/switch`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ diffType: "branch", base: "nope-does-not-exist" }),
|
||||
});
|
||||
const unknown = await unknownResponse.json() as { base?: string; error?: string };
|
||||
expect(unknown.base).toBe("nope-does-not-exist");
|
||||
expect(unknown.error).toBeTruthy();
|
||||
|
||||
// Feedback to clean up the waitForDecision promise.
|
||||
await fetch(`${server.url}/api/feedback`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ approved: false, feedback: "done", annotations: [] }),
|
||||
});
|
||||
await server.waitForDecision();
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
}, 15_000);
|
||||
|
||||
test("initialBase overrides gitContext.defaultBranch in server state", async () => {
|
||||
// Simulates a programmatic caller (Pi event bus, other extensions) that
|
||||
// opens a review against a non-default base. The server's currentBase —
|
||||
// which drives /api/diff, agent prompts, and file-content fetches — must
|
||||
// honor that override instead of falling back to the detected default.
|
||||
const homeDir = makeTempDir("plannotator-pi-home-");
|
||||
const repoDir = initRepo();
|
||||
process.env.HOME = homeDir;
|
||||
process.chdir(repoDir);
|
||||
process.env.PLANNOTATOR_PORT = String(await reservePort());
|
||||
|
||||
git(repoDir, ["checkout", "-b", "develop"]);
|
||||
writeFileSync(join(repoDir, "develop-file.txt"), "develop\n", "utf-8");
|
||||
git(repoDir, ["add", "develop-file.txt"]);
|
||||
git(repoDir, ["commit", "-m", "develop commit"]);
|
||||
git(repoDir, ["checkout", "-b", "feature/x"]);
|
||||
|
||||
const gitContext = await getGitContext();
|
||||
// Detected default is "main"; caller explicitly wants "develop".
|
||||
expect(gitContext.defaultBranch).toBe("main");
|
||||
const diff = await runGitDiff("branch", "develop");
|
||||
|
||||
const server = await startReviewServer({
|
||||
rawPatch: diff.patch,
|
||||
gitRef: diff.label,
|
||||
error: diff.error,
|
||||
diffType: "branch",
|
||||
gitContext,
|
||||
initialBase: "develop",
|
||||
origin: "pi",
|
||||
htmlContent: "<!doctype html><html><body>review</body></html>",
|
||||
});
|
||||
|
||||
try {
|
||||
const payload = await fetch(`${server.url}/api/diff`).then((r) => r.json()) as {
|
||||
base?: string;
|
||||
gitContext?: { defaultBranch: string };
|
||||
};
|
||||
// The server must echo the caller's override, not the detected default.
|
||||
expect(payload.base).toBe("develop");
|
||||
expect(payload.gitContext?.defaultBranch).toBe("main");
|
||||
|
||||
await fetch(`${server.url}/api/feedback`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ approved: false, feedback: "done", annotations: [] }),
|
||||
});
|
||||
await server.waitForDecision();
|
||||
} finally {
|
||||
server.stop();
|
||||
}
|
||||
}, 15_000);
|
||||
});
|
||||
28
extensions/plannotator/server.ts
Normal file
@@ -0,0 +1,28 @@
|
||||
/**
|
||||
* Node-compatible servers for Plannotator Pi extension.
|
||||
*
|
||||
* Pi loads extensions via jiti (Node.js), so we can't use Bun.serve().
|
||||
* These are lightweight node:http servers implementing just the routes
|
||||
* each UI needs — plan review, code review, and markdown annotation.
|
||||
*/
|
||||
|
||||
export type {
|
||||
DiffOption,
|
||||
DiffType,
|
||||
GitContext,
|
||||
} from "./generated/review-core.js";
|
||||
export {
|
||||
type AnnotateServerResult,
|
||||
startAnnotateServer,
|
||||
} from "./server/serverAnnotate.js";
|
||||
export {
|
||||
type PlanServerResult,
|
||||
startPlanReviewServer,
|
||||
} from "./server/serverPlan.js";
|
||||
export {
|
||||
getGitContext,
|
||||
reviewRuntime,
|
||||
type ReviewServerResult,
|
||||
runGitDiff,
|
||||
startReviewServer,
|
||||
} from "./server/serverReview.js";
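// Minimal sketch of the node:http shape these server modules follow (illustrative
// only; the real route handling lives in server/serverReview.ts and friends, and
// the response payload shown here is an assumption):
//
//   import { createServer } from "node:http";
//   const httpServer = createServer((req, res) => {
//     if (req.url?.startsWith("/api/diff")) {
//       res.writeHead(200, { "Content-Type": "application/json" });
//       res.end(JSON.stringify({ rawPatch: "", origin: "pi" }));
//       return;
//     }
//     res.writeHead(404);
//     res.end();
//   });
//   httpServer.listen(Number(process.env.PLANNOTATOR_PORT ?? 0), "127.0.0.1");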
|
||||
515
extensions/plannotator/server/agent-jobs.ts
Normal file
@@ -0,0 +1,515 @@
|
||||
/**
|
||||
* Agent Jobs — Pi (node:http) server handler.
|
||||
*
|
||||
* Manages background agent processes (spawn, monitor, kill) and exposes
|
||||
* HTTP routes + SSE broadcasting for job status updates.
|
||||
*
|
||||
* Mirrors packages/server/agent-jobs.ts but uses node:http primitives.
|
||||
*/
|
||||
|
||||
import type { IncomingMessage, ServerResponse } from "node:http";
|
||||
import { spawn, execFileSync, type ChildProcess } from "node:child_process";
|
||||
import {
|
||||
type AgentJobInfo,
|
||||
type AgentJobEvent,
|
||||
type AgentCapability,
|
||||
type AgentCapabilities,
|
||||
isTerminalStatus,
|
||||
jobSource,
|
||||
serializeAgentSSEEvent,
|
||||
AGENT_HEARTBEAT_COMMENT,
|
||||
AGENT_HEARTBEAT_INTERVAL_MS,
|
||||
} from "../generated/agent-jobs.js";
|
||||
import { formatClaudeLogEvent } from "../generated/claude-review.js";
|
||||
import { json, parseBody } from "./helpers.js";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Route prefixes
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const BASE = "/api/agents";
|
||||
const JOBS = `${BASE}/jobs`;
|
||||
const JOBS_STREAM = `${JOBS}/stream`;
|
||||
const CAPABILITIES = `${BASE}/capabilities`;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// which() helper for Node.js
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
function whichCmd(cmd: string): boolean {
|
||||
try {
|
||||
const bin = process.platform === "win32" ? "where" : "which";
|
||||
execFileSync(bin, [cmd], { encoding: "utf-8", stdio: ["pipe", "pipe", "pipe"] });
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export interface AgentJobHandlerOptions {
|
||||
mode: "plan" | "review" | "annotate";
|
||||
getServerUrl: () => string;
|
||||
getCwd: () => string;
|
||||
/** Server-side command builder for known providers (codex, claude, tour). */
|
||||
buildCommand?: (provider: string, config?: Record<string, unknown>) => Promise<{
|
||||
command: string[];
|
||||
outputPath?: string;
|
||||
captureStdout?: boolean;
|
||||
stdinPrompt?: string;
|
||||
cwd?: string;
|
||||
prompt?: string;
|
||||
label?: string;
|
||||
/** Underlying engine used (e.g., "claude" or "codex"). Stored on AgentJobInfo for UI display. */
|
||||
engine?: string;
|
||||
/** Model used (e.g., "sonnet", "opus"). Stored on AgentJobInfo for UI display. */
|
||||
model?: string;
|
||||
/** Claude --effort level. */
|
||||
effort?: string;
|
||||
/** Codex reasoning effort level. */
|
||||
reasoningEffort?: string;
|
||||
/** Whether Codex fast mode was enabled. */
|
||||
fastMode?: boolean;
|
||||
/** PR URL at launch time. */
|
||||
prUrl?: string;
|
||||
/** PR diff scope at launch time. */
|
||||
diffScope?: string;
|
||||
/** Diff context snapshot at launch (stored on AgentJobInfo for per-job "Copy All"). */
|
||||
diffContext?: AgentJobInfo["diffContext"];
|
||||
} | null>;
|
||||
/** Called when a job completes successfully — parse results and push annotations. */
|
||||
onJobComplete?: (job: AgentJobInfo, meta: { outputPath?: string; stdout?: string; cwd?: string }) => void | Promise<void>;
|
||||
}
|
||||
|
||||
export function createAgentJobHandler(options: AgentJobHandlerOptions) {
|
||||
const { mode, getServerUrl, getCwd } = options;
|
||||
|
||||
// --- State ---
|
||||
const jobs = new Map<string, { info: AgentJobInfo; proc: ChildProcess | null }>();
|
||||
const jobOutputPaths = new Map<string, string>();
|
||||
const subscribers = new Set<ServerResponse>();
|
||||
let version = 0;
|
||||
|
||||
// --- Capability detection (run once) ---
|
||||
const capabilities: AgentCapability[] = [
|
||||
{ id: "claude", name: "Claude Code", available: whichCmd("claude") },
|
||||
{ id: "codex", name: "Codex CLI", available: whichCmd("codex") },
|
||||
{ id: "tour", name: "Code Tour", available: whichCmd("claude") || whichCmd("codex") },
|
||||
];
|
||||
const capabilitiesResponse: AgentCapabilities = {
|
||||
mode,
|
||||
providers: capabilities,
|
||||
available: capabilities.some((c) => c.available),
|
||||
};
|
||||
|
||||
// --- SSE broadcasting ---
|
||||
function broadcast(event: AgentJobEvent): void {
|
||||
version++;
|
||||
const data = serializeAgentSSEEvent(event);
|
||||
for (const res of subscribers) {
|
||||
try {
|
||||
res.write(data);
|
||||
} catch {
|
||||
subscribers.delete(res);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// --- Process lifecycle ---
|
||||
function spawnJob(
|
||||
provider: string,
|
||||
command: string[],
|
||||
label: string,
|
||||
outputPath?: string,
|
||||
spawnOptions?: { captureStdout?: boolean; stdinPrompt?: string; cwd?: string; prompt?: string; engine?: string; model?: string; effort?: string; reasoningEffort?: string; fastMode?: boolean; prUrl?: string; diffScope?: string; diffContext?: AgentJobInfo["diffContext"] },
|
||||
): AgentJobInfo {
|
||||
const id = crypto.randomUUID();
|
||||
const source = jobSource(id);
|
||||
|
||||
const info: AgentJobInfo = {
|
||||
id,
|
||||
source,
|
||||
provider,
|
||||
label,
|
||||
status: "starting",
|
||||
startedAt: Date.now(),
|
||||
command,
|
||||
cwd: getCwd(),
|
||||
...(spawnOptions?.engine && { engine: spawnOptions.engine }),
|
||||
...(spawnOptions?.model && { model: spawnOptions.model }),
|
||||
...(spawnOptions?.effort && { effort: spawnOptions.effort }),
|
||||
...(spawnOptions?.reasoningEffort && { reasoningEffort: spawnOptions.reasoningEffort }),
|
||||
...(spawnOptions?.fastMode && { fastMode: spawnOptions.fastMode }),
|
||||
...(spawnOptions?.prUrl && { prUrl: spawnOptions.prUrl }),
|
||||
...(spawnOptions?.diffScope && { diffScope: spawnOptions.diffScope }),
|
||||
...(spawnOptions?.diffContext && { diffContext: spawnOptions.diffContext }),
|
||||
};
|
||||
|
||||
let proc: ChildProcess | null = null;
|
||||
|
||||
try {
|
||||
const spawnCwd = spawnOptions?.cwd ?? getCwd();
|
||||
const captureStdout = spawnOptions?.captureStdout ?? false;
|
||||
const hasStdinPrompt = !!spawnOptions?.stdinPrompt;
|
||||
|
||||
proc = spawn(command[0], command.slice(1), {
|
||||
cwd: spawnCwd,
|
||||
stdio: [
|
||||
hasStdinPrompt ? "pipe" : "ignore",
|
||||
captureStdout ? "pipe" : "ignore",
|
||||
"pipe",
|
||||
],
|
||||
env: {
|
||||
...process.env,
|
||||
PLANNOTATOR_AGENT_SOURCE: source,
|
||||
PLANNOTATOR_API_URL: getServerUrl(),
|
||||
},
|
||||
});
|
||||
|
||||
// Write prompt to stdin and close (for providers that read prompt from stdin)
|
||||
if (hasStdinPrompt && proc.stdin) {
|
||||
proc.stdin.write(spawnOptions!.stdinPrompt!);
|
||||
proc.stdin.end();
|
||||
}
|
||||
|
||||
info.status = "running";
|
||||
info.cwd = spawnCwd;
|
||||
if (spawnOptions?.prompt) info.prompt = spawnOptions.prompt;
|
||||
jobs.set(id, { info, proc });
|
||||
if (outputPath) jobOutputPaths.set(id, outputPath);
|
||||
if (spawnOptions?.cwd) jobOutputPaths.set(`${id}:cwd`, spawnOptions.cwd);
|
||||
broadcast({ type: "job:started", job: { ...info } });
|
||||
|
||||
// --- Stdout capture (Claude JSONL streaming) ---
|
||||
let stdoutBuf = "";
|
||||
if (captureStdout && proc.stdout) {
|
||||
proc.stdout.on("data", (chunk: Buffer) => {
|
||||
const text = chunk.toString();
|
||||
stdoutBuf += text;
|
||||
|
||||
// Forward JSONL lines as log events
|
||||
const lines = text.split('\n');
|
||||
for (const line of lines) {
|
||||
if (!line.trim()) continue;
|
||||
// Tour jobs with the Claude engine also stream Claude JSONL.
|
||||
if (provider === "claude" || spawnOptions?.engine === "claude") {
|
||||
const formatted = formatClaudeLogEvent(line);
|
||||
if (formatted !== null) {
|
||||
broadcast({ type: "job:log", jobId: id, delta: formatted + '\n' });
|
||||
}
|
||||
continue;
|
||||
}
|
||||
try {
|
||||
const event = JSON.parse(line);
|
||||
if (event.type === 'result') continue;
|
||||
} catch { /* not JSON — forward as raw log */ }
|
||||
broadcast({ type: "job:log", jobId: id, delta: line + '\n' });
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// --- Stderr: buffer tail for errors + live log streaming ---
|
||||
let stderrBuf = "";
|
||||
let logPending = "";
|
||||
let logFlushTimer: ReturnType<typeof setTimeout> | null = null;
|
||||
|
||||
if (proc.stderr) {
|
||||
proc.stderr.on("data", (chunk: Buffer) => {
|
||||
const text = chunk.toString();
|
||||
stderrBuf = (stderrBuf + text).slice(-500);
|
||||
logPending += text;
|
||||
|
||||
if (!logFlushTimer) {
|
||||
logFlushTimer = setTimeout(() => {
|
||||
if (logPending) {
|
||||
broadcast({ type: "job:log", jobId: id, delta: logPending });
|
||||
logPending = "";
|
||||
}
|
||||
logFlushTimer = null;
|
||||
}, 200);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// Monitor process close (fires after stdio streams are fully drained,
|
||||
// unlike 'exit' which fires before — critical for stdout capture)
|
||||
proc.on("close", async (exitCode) => {
|
||||
// Flush remaining stderr
|
||||
if (logFlushTimer) { clearTimeout(logFlushTimer); logFlushTimer = null; }
|
||||
if (logPending) {
|
||||
broadcast({ type: "job:log", jobId: id, delta: logPending });
|
||||
logPending = "";
|
||||
}
|
||||
|
||||
const entry = jobs.get(id);
|
||||
if (!entry || isTerminalStatus(entry.info.status)) return;
|
||||
|
||||
entry.info.endedAt = Date.now();
|
||||
entry.info.exitCode = exitCode ?? undefined;
|
||||
entry.info.status = exitCode === 0 ? "done" : "failed";
|
||||
|
||||
if (exitCode !== 0 && stderrBuf) {
|
||||
entry.info.error = stderrBuf;
|
||||
}
|
||||
|
||||
// Ingest results before broadcasting completion
|
||||
const jobOutputPath = jobOutputPaths.get(id);
|
||||
const jobCwd = jobOutputPaths.get(`${id}:cwd`);
|
||||
if (exitCode === 0 && options.onJobComplete) {
|
||||
try {
|
||||
await options.onJobComplete(entry.info, {
|
||||
outputPath: jobOutputPath,
|
||||
stdout: captureStdout ? stdoutBuf : undefined,
|
||||
cwd: jobCwd,
|
||||
});
|
||||
} catch {
|
||||
// Result ingestion failure shouldn't prevent job completion broadcast
|
||||
}
|
||||
}
|
||||
jobOutputPaths.delete(id);
|
||||
jobOutputPaths.delete(`${id}:cwd`);
|
||||
|
||||
broadcast({ type: "job:completed", job: { ...entry.info } });
|
||||
});
|
||||
|
||||
// Handle spawn errors after process starts
|
||||
proc.on("error", (err) => {
|
||||
const entry = jobs.get(id);
|
||||
if (!entry || isTerminalStatus(entry.info.status)) return;
|
||||
|
||||
entry.info.status = "failed";
|
||||
entry.info.endedAt = Date.now();
|
||||
entry.info.error = err.message;
|
||||
broadcast({ type: "job:completed", job: { ...entry.info } });
|
||||
});
|
||||
} catch (err) {
|
||||
jobs.set(id, { info, proc: null });
|
||||
broadcast({ type: "job:started", job: { ...info } });
|
||||
|
||||
info.status = "failed";
|
||||
info.endedAt = Date.now();
|
||||
info.error = err instanceof Error ? err.message : String(err);
|
||||
broadcast({ type: "job:completed", job: { ...info } });
|
||||
}
|
||||
|
||||
return { ...info };
|
||||
}
|
||||
|
||||
function killJob(id: string): boolean {
|
||||
const entry = jobs.get(id);
|
||||
if (!entry || isTerminalStatus(entry.info.status)) return false;
|
||||
|
||||
if (entry.proc) {
|
||||
try {
|
||||
entry.proc.kill();
|
||||
} catch {
|
||||
// Process may have already exited
|
||||
}
|
||||
}
|
||||
|
||||
entry.info.status = "killed";
|
||||
entry.info.endedAt = Date.now();
|
||||
jobOutputPaths.delete(id);
|
||||
jobOutputPaths.delete(`${id}:cwd`);
|
||||
broadcast({ type: "job:completed", job: { ...entry.info } });
|
||||
return true;
|
||||
}
|
||||
|
||||
function killAll(): number {
|
||||
let count = 0;
|
||||
for (const [id, entry] of jobs) {
|
||||
if (!isTerminalStatus(entry.info.status)) {
|
||||
killJob(id);
|
||||
count++;
|
||||
}
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
function getAllJobs(): AgentJobInfo[] {
|
||||
return Array.from(jobs.values()).map((e) => ({ ...e.info }));
|
||||
}
|
||||
|
||||
// --- HTTP handler ---
|
||||
return {
|
||||
killAll,
|
||||
|
||||
async handle(
|
||||
req: IncomingMessage,
|
||||
res: ServerResponse,
|
||||
url: URL,
|
||||
): Promise<boolean> {
|
||||
// --- GET /api/agents/capabilities ---
|
||||
if (url.pathname === CAPABILITIES && req.method === "GET") {
|
||||
json(res, capabilitiesResponse);
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- SSE stream ---
|
||||
if (url.pathname === JOBS_STREAM && req.method === "GET") {
|
||||
res.writeHead(200, {
|
||||
"Content-Type": "text/event-stream",
|
||||
"Cache-Control": "no-cache",
|
||||
Connection: "keep-alive",
|
||||
});
|
||||
|
||||
res.setTimeout(0);
|
||||
|
||||
// Send current state as snapshot
|
||||
const snapshot: AgentJobEvent = {
|
||||
type: "snapshot",
|
||||
jobs: getAllJobs(),
|
||||
};
|
||||
res.write(serializeAgentSSEEvent(snapshot));
|
||||
|
||||
subscribers.add(res);
|
||||
|
||||
// Heartbeat to keep connection alive
|
||||
const heartbeatTimer = setInterval(() => {
|
||||
try {
|
||||
res.write(AGENT_HEARTBEAT_COMMENT);
|
||||
} catch {
|
||||
clearInterval(heartbeatTimer);
|
||||
subscribers.delete(res);
|
||||
}
|
||||
}, AGENT_HEARTBEAT_INTERVAL_MS);
|
||||
|
||||
// Clean up on disconnect
|
||||
res.on("close", () => {
|
||||
clearInterval(heartbeatTimer);
|
||||
subscribers.delete(res);
|
||||
});
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- GET /api/agents/jobs (snapshot / polling fallback) ---
|
||||
if (url.pathname === JOBS && req.method === "GET") {
|
||||
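// Versioned polling: a matching "since" version means nothing changed, so answer 304 with no body.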
const since = url.searchParams.get("since");
|
||||
if (since !== null) {
|
||||
const sinceVersion = parseInt(since, 10);
|
||||
if (!isNaN(sinceVersion) && sinceVersion === version) {
|
||||
res.writeHead(304);
|
||||
res.end();
|
||||
return true;
|
||||
}
|
||||
}
|
||||
json(res, { jobs: getAllJobs(), version });
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- POST /api/agents/jobs (launch) ---
|
||||
if (url.pathname === JOBS && req.method === "POST") {
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
const provider = typeof body.provider === "string" ? body.provider : "";
|
||||
let rawCommand = Array.isArray(body.command) ? body.command : [];
|
||||
let command = rawCommand.filter((c: unknown): c is string => typeof c === "string");
|
||||
let label = typeof body.label === "string" ? body.label : `${provider} agent`;
|
||||
let outputPath: string | undefined;
|
||||
|
||||
// Validate provider is a known, available capability
|
||||
const cap = capabilities.find((c) => c.id === provider);
|
||||
if (!cap || !cap.available) {
|
||||
json(res, { error: `Unknown or unavailable provider: ${provider}` }, 400);
|
||||
return true;
|
||||
}
|
||||
|
||||
// Try server-side command building for known providers
|
||||
let captureStdout = false;
|
||||
let stdinPrompt: string | undefined;
|
||||
let spawnCwd: string | undefined;
|
||||
let promptText: string | undefined;
|
||||
let jobEngine: string | undefined;
|
||||
let jobModel: string | undefined;
|
||||
let jobEffort: string | undefined;
|
||||
let jobReasoningEffort: string | undefined;
|
||||
let jobFastMode: boolean | undefined;
|
||||
let jobPrUrl: string | undefined;
|
||||
let jobDiffScope: string | undefined;
|
||||
let jobDiffContext: AgentJobInfo["diffContext"] | undefined;
|
||||
if (options.buildCommand) {
|
||||
// Thread config from POST body to buildCommand
|
||||
const config: Record<string, unknown> = {};
|
||||
if (typeof body.engine === "string") config.engine = body.engine;
|
||||
if (typeof body.model === "string") config.model = body.model;
|
||||
if (typeof body.reasoningEffort === "string") config.reasoningEffort = body.reasoningEffort;
|
||||
if (typeof body.effort === "string") config.effort = body.effort;
|
||||
if (body.fastMode === true) config.fastMode = true;
|
||||
const built = await options.buildCommand(provider, Object.keys(config).length > 0 ? config : undefined);
|
||||
if (built) {
|
||||
command = built.command;
|
||||
outputPath = built.outputPath;
|
||||
captureStdout = built.captureStdout ?? false;
|
||||
stdinPrompt = built.stdinPrompt;
|
||||
spawnCwd = built.cwd;
|
||||
promptText = built.prompt;
|
||||
if (built.label) label = built.label;
|
||||
jobEngine = built.engine;
|
||||
jobModel = built.model;
|
||||
jobEffort = built.effort;
|
||||
jobReasoningEffort = built.reasoningEffort;
|
||||
jobFastMode = built.fastMode;
|
||||
jobPrUrl = built.prUrl;
|
||||
jobDiffScope = built.diffScope;
|
||||
jobDiffContext = built.diffContext;
|
||||
}
|
||||
}
|
||||
|
||||
if (command.length === 0) {
|
||||
json(res, { error: 'Missing "command" array' }, 400);
|
||||
return true;
|
||||
}
|
||||
|
||||
const job = spawnJob(provider, command, label, outputPath, {
|
||||
captureStdout,
|
||||
stdinPrompt,
|
||||
cwd: spawnCwd,
|
||||
prompt: promptText,
|
||||
engine: jobEngine,
|
||||
model: jobModel,
|
||||
effort: jobEffort,
|
||||
reasoningEffort: jobReasoningEffort,
|
||||
fastMode: jobFastMode,
|
||||
prUrl: jobPrUrl,
|
||||
diffScope: jobDiffScope,
|
||||
diffContext: jobDiffContext,
|
||||
});
|
||||
json(res, { job }, 201);
|
||||
} catch {
|
||||
json(res, { error: "Invalid JSON" }, 400);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- DELETE /api/agents/jobs/:id (kill one) ---
|
||||
if (url.pathname.startsWith(JOBS + "/") && url.pathname !== JOBS_STREAM && req.method === "DELETE") {
|
||||
const id = url.pathname.slice(JOBS.length + 1);
|
||||
if (!id) {
|
||||
json(res, { error: "Missing job ID" }, 400);
|
||||
return true;
|
||||
}
|
||||
const found = killJob(id);
|
||||
if (!found) {
|
||||
json(res, { error: "Job not found or already terminal" }, 404);
|
||||
return true;
|
||||
}
|
||||
json(res, { ok: true });
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- DELETE /api/agents/jobs (kill all) ---
|
||||
if (url.pathname === JOBS && req.method === "DELETE") {
|
||||
const count = killAll();
|
||||
json(res, { ok: true, killed: count });
|
||||
return true;
|
||||
}
|
||||
|
||||
// Not handled
|
||||
return false;
|
||||
},
|
||||
};
|
||||
}
|
||||
85
extensions/plannotator/server/annotations.ts
Normal file
@@ -0,0 +1,85 @@
|
||||
/**
|
||||
* Editor annotation handler (in-memory store for VS Code integration).
|
||||
* EditorAnnotation type, createEditorAnnotationHandler
|
||||
*/
|
||||
|
||||
import { randomUUID } from "node:crypto";
|
||||
import type { IncomingMessage } from "node:http";
|
||||
import { json, parseBody } from "./helpers";
|
||||
|
||||
interface EditorAnnotation {
|
||||
id: string;
|
||||
filePath: string;
|
||||
selectedText: string;
|
||||
lineStart: number;
|
||||
lineEnd: number;
|
||||
comment?: string;
|
||||
createdAt: number;
|
||||
}
|
||||
|
||||
export function createEditorAnnotationHandler() {
|
||||
const annotations: EditorAnnotation[] = [];
|
||||
|
||||
return {
|
||||
async handle(
|
||||
req: IncomingMessage,
|
||||
res: import("node:http").ServerResponse,
|
||||
url: URL,
|
||||
): Promise<boolean> {
|
||||
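// Routes: GET /api/editor-annotations (list), POST /api/editor-annotation (create),
// DELETE /api/editor-annotation?id=... (remove).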
if (url.pathname === "/api/editor-annotations" && req.method === "GET") {
|
||||
json(res, { annotations });
|
||||
return true;
|
||||
}
|
||||
|
||||
if (url.pathname === "/api/editor-annotation" && req.method === "POST") {
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
if (
|
||||
!body.filePath ||
|
||||
!body.selectedText ||
|
||||
!body.lineStart ||
|
||||
!body.lineEnd
|
||||
) {
|
||||
json(res, { error: "Missing required fields" }, 400);
|
||||
return true;
|
||||
}
|
||||
|
||||
const annotation: EditorAnnotation = {
|
||||
id: randomUUID(),
|
||||
filePath: String(body.filePath),
|
||||
selectedText: String(body.selectedText),
|
||||
lineStart: Number(body.lineStart),
|
||||
lineEnd: Number(body.lineEnd),
|
||||
comment: typeof body.comment === "string" ? body.comment : undefined,
|
||||
createdAt: Date.now(),
|
||||
};
|
||||
|
||||
annotations.push(annotation);
|
||||
json(res, { id: annotation.id });
|
||||
} catch {
|
||||
json(res, { error: "Invalid JSON" }, 400);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
if (
|
||||
url.pathname === "/api/editor-annotation" &&
|
||||
req.method === "DELETE"
|
||||
) {
|
||||
const id = url.searchParams.get("id");
|
||||
if (!id) {
|
||||
json(res, { error: "Missing id parameter" }, 400);
|
||||
return true;
|
||||
}
|
||||
const idx = annotations.findIndex((annotation) => annotation.id === id);
|
||||
if (idx !== -1) {
|
||||
annotations.splice(idx, 1);
|
||||
}
|
||||
json(res, { ok: true });
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
},
|
||||
};
|
||||
}
|
||||
189
extensions/plannotator/server/external-annotations.ts
Normal file
@@ -0,0 +1,189 @@
|
||||
/**
|
||||
* External Annotations — Pi (node:http) server handler.
|
||||
*
|
||||
* Thin HTTP adapter over the shared annotation store. Mirrors the Bun
|
||||
* handler at packages/server/external-annotations.ts but uses node:http
|
||||
* IncomingMessage/ServerResponse + res.write() for SSE.
|
||||
*/
|
||||
|
||||
import type { IncomingMessage, ServerResponse } from "node:http";
|
||||
import {
|
||||
createAnnotationStore,
|
||||
transformPlanInput,
|
||||
transformReviewInput,
|
||||
serializeSSEEvent,
|
||||
HEARTBEAT_COMMENT,
|
||||
HEARTBEAT_INTERVAL_MS,
|
||||
type StorableAnnotation,
|
||||
type ExternalAnnotationEvent,
|
||||
} from "../generated/external-annotation.js";
|
||||
import { json, parseBody } from "./helpers.js";
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Route prefix
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const BASE = "/api/external-annotations";
|
||||
const STREAM = `${BASE}/stream`;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Factory
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export function createExternalAnnotationHandler(mode: "plan" | "review") {
|
||||
const store = createAnnotationStore<StorableAnnotation>();
|
||||
const subscribers = new Set<ServerResponse>();
|
||||
const transform = mode === "plan" ? transformPlanInput : transformReviewInput;
|
||||
|
||||
// Wire store mutations → SSE broadcast
|
||||
store.onMutation((event: ExternalAnnotationEvent<StorableAnnotation>) => {
|
||||
const data = serializeSSEEvent(event);
|
||||
for (const res of subscribers) {
|
||||
try {
|
||||
res.write(data);
|
||||
} catch {
|
||||
// Response closed — clean up
|
||||
subscribers.delete(res);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
return {
|
||||
/** Push annotations directly into the store (bypasses HTTP, reuses same validation). */
|
||||
addAnnotations(body: unknown): { ids: string[] } | { error: string } {
|
||||
const parsed = transform(body);
|
||||
if ("error" in parsed) return { error: parsed.error };
|
||||
const created = store.add(parsed.annotations);
|
||||
return { ids: created.map((a: { id: string }) => a.id) };
|
||||
},
|
||||
|
||||
async handle(
|
||||
req: IncomingMessage,
|
||||
res: ServerResponse,
|
||||
url: URL,
|
||||
): Promise<boolean> {
|
||||
// --- SSE stream ---
|
||||
if (url.pathname === STREAM && req.method === "GET") {
|
||||
res.writeHead(200, {
|
||||
"Content-Type": "text/event-stream",
|
||||
"Cache-Control": "no-cache",
|
||||
Connection: "keep-alive",
|
||||
});
|
||||
|
||||
// Disable idle timeout for SSE connections
|
||||
res.setTimeout(0);
|
||||
|
||||
// Send current state as snapshot
|
||||
const snapshot: ExternalAnnotationEvent<StorableAnnotation> = {
|
||||
type: "snapshot",
|
||||
annotations: store.getAll(),
|
||||
};
|
||||
res.write(serializeSSEEvent(snapshot));
|
||||
|
||||
subscribers.add(res);
|
||||
|
||||
// Heartbeat to keep connection alive
|
||||
const heartbeatTimer = setInterval(() => {
|
||||
try {
|
||||
res.write(HEARTBEAT_COMMENT);
|
||||
} catch {
|
||||
clearInterval(heartbeatTimer);
|
||||
subscribers.delete(res);
|
||||
}
|
||||
}, HEARTBEAT_INTERVAL_MS);
|
||||
|
||||
// Clean up on disconnect
|
||||
res.on("close", () => {
|
||||
clearInterval(heartbeatTimer);
|
||||
subscribers.delete(res);
|
||||
});
|
||||
|
||||
// Don't end the response — SSE stays open
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- GET snapshot (polling fallback) ---
|
||||
if (url.pathname === BASE && req.method === "GET") {
|
||||
const since = url.searchParams.get("since");
|
||||
if (since !== null) {
|
||||
const sinceVersion = parseInt(since, 10);
|
||||
if (!isNaN(sinceVersion) && sinceVersion === store.version) {
|
||||
res.writeHead(304);
|
||||
res.end();
|
||||
return true;
|
||||
}
|
||||
}
|
||||
json(res, {
|
||||
annotations: store.getAll(),
|
||||
version: store.version,
|
||||
});
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- POST (add single or batch) ---
|
||||
if (url.pathname === BASE && req.method === "POST") {
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
const parsed = transform(body);
|
||||
|
||||
if ("error" in parsed) {
|
||||
json(res, { error: parsed.error }, 400);
|
||||
return true;
|
||||
}
|
||||
|
||||
const created = store.add(parsed.annotations);
|
||||
json(res, { ids: created.map((a: StorableAnnotation) => a.id) }, 201);
|
||||
} catch {
|
||||
json(res, { error: "Invalid JSON" }, 400);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- PATCH (update fields on a single annotation) ---
|
||||
if (url.pathname === BASE && req.method === "PATCH") {
|
||||
const id = url.searchParams.get("id");
|
||||
if (!id) {
|
||||
json(res, { error: "Missing ?id parameter" }, 400);
|
||||
return true;
|
||||
}
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
const updated = store.update(id, body as Partial<StorableAnnotation>);
|
||||
if (!updated) {
|
||||
json(res, { error: "Not found" }, 404);
|
||||
return true;
|
||||
}
|
||||
json(res, { annotation: updated });
|
||||
} catch {
|
||||
json(res, { error: "Invalid JSON" }, 400);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- DELETE (by id, by source, or clear all) ---
|
||||
if (url.pathname === BASE && req.method === "DELETE") {
|
||||
const id = url.searchParams.get("id");
|
||||
const source = url.searchParams.get("source");
|
||||
|
||||
if (id) {
|
||||
store.remove(id);
|
||||
json(res, { ok: true });
|
||||
return true;
|
||||
}
|
||||
|
||||
if (source) {
|
||||
const count = store.clearBySource(source);
|
||||
json(res, { ok: true, removed: count });
|
||||
return true;
|
||||
}
|
||||
|
||||
const count = store.clearAll();
|
||||
json(res, { ok: true, removed: count });
|
||||
return true;
|
||||
}
|
||||
|
||||
// Not handled — pass through
|
||||
return false;
|
||||
},
|
||||
};
|
||||
}
|
||||
210
extensions/plannotator/server/handlers.ts
Normal file
@@ -0,0 +1,210 @@
|
||||
/**
|
||||
* Shared request handlers reused across plan, review, and annotate servers.
|
||||
* handleImageRequest, handleUploadRequest, handleDraftRequest, handleFavicon
|
||||
*/
|
||||
|
||||
import { randomUUID } from "node:crypto";
|
||||
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
|
||||
import type { IncomingMessage } from "node:http";
|
||||
import { tmpdir } from "node:os";
|
||||
import { join, resolve as resolvePath } from "node:path";
|
||||
import { saveDraft, loadDraft, deleteDraft } from "../generated/draft.js";
|
||||
import { FAVICON_SVG } from "../generated/favicon.js";
|
||||
|
||||
import { json, parseBody, send, toWebRequest } from "./helpers";
|
||||
|
||||
type Res = import("node:http").ServerResponse;
|
||||
|
||||
const ALLOWED_IMAGE_EXTENSIONS = new Set([
|
||||
"png",
|
||||
"jpg",
|
||||
"jpeg",
|
||||
"gif",
|
||||
"webp",
|
||||
"svg",
|
||||
"bmp",
|
||||
"ico",
|
||||
"tiff",
|
||||
"tif",
|
||||
"avif",
|
||||
]);
|
||||
|
||||
const IMAGE_CONTENT_TYPES: Record<string, string> = {
|
||||
png: "image/png",
|
||||
jpg: "image/jpeg",
|
||||
jpeg: "image/jpeg",
|
||||
gif: "image/gif",
|
||||
webp: "image/webp",
|
||||
svg: "image/svg+xml",
|
||||
bmp: "image/bmp",
|
||||
ico: "image/x-icon",
|
||||
tiff: "image/tiff",
|
||||
tif: "image/tiff",
|
||||
avif: "image/avif",
|
||||
};
|
||||
|
||||
const UPLOAD_DIR = join(tmpdir(), "plannotator");
|
||||
|
||||
function getExtension(filePath: string): string {
|
||||
const lastDot = filePath.lastIndexOf(".");
|
||||
if (lastDot === -1) return "";
|
||||
return filePath.slice(lastDot + 1).toLowerCase();
|
||||
}
|
||||
|
||||
function validateImagePath(rawPath: string): {
|
||||
valid: boolean;
|
||||
resolved: string;
|
||||
error?: string;
|
||||
} {
|
||||
const resolved = resolvePath(rawPath);
|
||||
const ext = getExtension(resolved);
|
||||
|
||||
if (!ALLOWED_IMAGE_EXTENSIONS.has(ext)) {
|
||||
return {
|
||||
valid: false,
|
||||
resolved,
|
||||
error: "Path does not point to a supported image file",
|
||||
};
|
||||
}
|
||||
|
||||
return { valid: true, resolved };
|
||||
}
|
||||
|
||||
function validateUploadExtension(fileName: string): {
|
||||
valid: boolean;
|
||||
ext: string;
|
||||
error?: string;
|
||||
} {
|
||||
const ext = getExtension(fileName) || "png";
|
||||
if (!ALLOWED_IMAGE_EXTENSIONS.has(ext)) {
|
||||
return {
|
||||
valid: false,
|
||||
ext,
|
||||
error: `File extension ".${ext}" is not a supported image type`,
|
||||
};
|
||||
}
|
||||
|
||||
return { valid: true, ext };
|
||||
}
|
||||
|
||||
function getImageContentType(filePath: string): string {
|
||||
return (
|
||||
IMAGE_CONTENT_TYPES[getExtension(filePath)] || "application/octet-stream"
|
||||
);
|
||||
}
|
||||
|
||||
export function handleImageRequest(res: Res, url: URL): void {
|
||||
const imagePath = url.searchParams.get("path");
|
||||
if (!imagePath) {
|
||||
send(res, "Missing path parameter", 400, { "Content-Type": "text/plain" });
|
||||
return;
|
||||
}
|
||||
|
||||
const tryServePath = (candidate: string): boolean => {
|
||||
const validation = validateImagePath(candidate);
|
||||
if (!validation.valid) return false;
|
||||
try {
|
||||
if (!existsSync(validation.resolved)) return false;
|
||||
const data = readFileSync(validation.resolved);
|
||||
send(res, data, 200, {
|
||||
"Content-Type": getImageContentType(validation.resolved),
|
||||
});
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
};
|
||||
|
||||
if (tryServePath(imagePath)) return;
|
||||
|
||||
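// Direct lookup missed: for relative paths, retry against the optional "base" query param.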
const base = url.searchParams.get("base");
|
||||
if (
|
||||
base &&
|
||||
!imagePath.startsWith("/") &&
|
||||
tryServePath(resolvePath(base, imagePath))
|
||||
) {
|
||||
return;
|
||||
}
|
||||
|
||||
const validation = validateImagePath(imagePath);
|
||||
if (!validation.valid) {
|
||||
send(res, validation.error || "Invalid image path", 403, {
|
||||
"Content-Type": "text/plain",
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
send(res, "File not found", 404, { "Content-Type": "text/plain" });
|
||||
}
|
||||
|
||||
export async function handleUploadRequest(
|
||||
req: IncomingMessage,
|
||||
res: Res,
|
||||
): Promise<void> {
|
||||
try {
|
||||
const request = toWebRequest(req);
|
||||
const formData = await request.formData();
|
||||
const file = formData.get("file");
|
||||
if (
|
||||
!file ||
|
||||
typeof file !== "object" ||
|
||||
!("arrayBuffer" in file) ||
|
||||
!("name" in file)
|
||||
) {
|
||||
json(res, { error: "No file provided" }, 400);
|
||||
return;
|
||||
}
|
||||
|
||||
const upload = file as File;
|
||||
const extResult = validateUploadExtension(upload.name);
|
||||
if (!extResult.valid) {
|
||||
json(res, { error: extResult.error }, 400);
|
||||
return;
|
||||
}
|
||||
|
||||
mkdirSync(UPLOAD_DIR, { recursive: true });
|
||||
const tempPath = join(UPLOAD_DIR, `${randomUUID()}.${extResult.ext}`);
|
||||
const bytes = Buffer.from(await upload.arrayBuffer());
|
||||
writeFileSync(tempPath, bytes);
|
||||
json(res, { path: tempPath, originalName: upload.name });
|
||||
} catch (err) {
|
||||
const message = err instanceof Error ? err.message : "Upload failed";
|
||||
json(res, { error: message }, 500);
|
||||
}
|
||||
}
|
||||
|
||||
export function handleDraftRequest(
|
||||
req: IncomingMessage,
|
||||
res: Res,
|
||||
draftKey: string,
|
||||
): Promise<void> | void {
|
||||
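// POST saves the draft, DELETE discards it, any other method loads it.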
if (req.method === "POST") {
|
||||
return parseBody(req)
|
||||
.then((body) => {
|
||||
saveDraft(draftKey, body);
|
||||
json(res, { ok: true });
|
||||
})
|
||||
.catch((err: unknown) => {
|
||||
const message = err instanceof Error ? err.message : "Failed to save draft";
|
||||
console.error(`[draft] save failed: ${message}`);
|
||||
json(res, { error: message }, 500);
|
||||
});
|
||||
} else if (req.method === "DELETE") {
|
||||
deleteDraft(draftKey);
|
||||
json(res, { ok: true });
|
||||
} else {
|
||||
const draft = loadDraft(draftKey);
|
||||
if (!draft) {
|
||||
json(res, { found: false }, 404);
|
||||
return;
|
||||
}
|
||||
json(res, draft);
|
||||
}
|
||||
}
|
||||
|
||||
export function handleFavicon(res: Res): void {
|
||||
send(res, FAVICON_SVG, 200, {
|
||||
"Content-Type": "image/svg+xml",
|
||||
"Cache-Control": "public, max-age=86400",
|
||||
});
|
||||
}
|
||||
78
extensions/plannotator/server/helpers.ts
Normal file
@@ -0,0 +1,78 @@
|
||||
/**
|
||||
* Core HTTP helpers for Pi extension servers.
|
||||
* parseBody, json, html, send, toWebRequest
|
||||
*/
|
||||
|
||||
import type { IncomingMessage } from "node:http";
|
||||
import { Readable } from "node:stream";
|
||||
|
||||
export function parseBody(
|
||||
req: IncomingMessage,
|
||||
): Promise<Record<string, unknown>> {
|
||||
return new Promise((resolve) => {
|
||||
let data = "";
|
||||
req.on("data", (chunk: string) => (data += chunk));
|
||||
req.on("end", () => {
|
||||
try {
|
||||
resolve(JSON.parse(data));
|
||||
} catch {
|
||||
resolve({});
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
export function json(
|
||||
res: import("node:http").ServerResponse,
|
||||
data: unknown,
|
||||
status = 200,
|
||||
): void {
|
||||
res.writeHead(status, { "Content-Type": "application/json" });
|
||||
res.end(JSON.stringify(data));
|
||||
}
|
||||
|
||||
export function html(
|
||||
res: import("node:http").ServerResponse,
|
||||
content: string,
|
||||
): void {
|
||||
res.writeHead(200, { "Content-Type": "text/html" });
|
||||
res.end(content);
|
||||
}
|
||||
|
||||
export function send(
|
||||
res: import("node:http").ServerResponse,
|
||||
body: string | Buffer,
|
||||
status = 200,
|
||||
headers: Record<string, string> = {},
|
||||
): void {
|
||||
res.writeHead(status, headers);
|
||||
res.end(body);
|
||||
}
|
||||
|
||||
export function requestUrl(req: IncomingMessage): URL {
|
||||
return new URL(req.url ?? "/", "http://localhost");
|
||||
}
|
||||
|
||||
export function toWebRequest(req: IncomingMessage): Request {
|
||||
const headers = new Headers();
|
||||
for (const [key, value] of Object.entries(req.headers)) {
|
||||
if (value === undefined) continue;
|
||||
if (Array.isArray(value)) {
|
||||
for (const item of value) headers.append(key, item);
|
||||
} else {
|
||||
headers.set(key, value);
|
||||
}
|
||||
}
|
||||
|
||||
const init: RequestInit & { duplex?: "half" } = {
|
||||
method: req.method,
|
||||
headers,
|
||||
};
|
||||
|
||||
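// Node's fetch Request requires duplex: "half" when the body is a stream.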
if (req.method !== "GET" && req.method !== "HEAD") {
|
||||
init.body = Readable.toWeb(req) as unknown as BodyInit;
|
||||
init.duplex = "half";
|
||||
}
|
||||
|
||||
return new Request(`http://localhost${req.url ?? "/"}`, init);
|
||||
}
|
||||
46
extensions/plannotator/server/ide.ts
Normal file
@@ -0,0 +1,46 @@
|
||||
/**
|
||||
* IDE integration — open plan diffs in VS Code.
|
||||
* Node.js equivalent of packages/server/ide.ts.
|
||||
*/
|
||||
|
||||
import { spawn } from "node:child_process";
|
||||
|
||||
/** Open two files in VS Code's diff viewer. Node.js equivalent of packages/server/ide.ts */
|
||||
export function openEditorDiff(
|
||||
oldPath: string,
|
||||
newPath: string,
|
||||
): Promise<{ ok: true } | { error: string }> {
|
||||
return new Promise((resolve) => {
|
||||
const proc = spawn("code", ["--diff", oldPath, newPath], {
|
||||
stdio: ["ignore", "ignore", "pipe"],
|
||||
});
|
||||
let stderr = "";
|
||||
proc.stderr?.on("data", (chunk: Buffer) => {
|
||||
stderr += chunk.toString();
|
||||
});
|
||||
proc.on("error", (err) => {
|
||||
if (err.message.includes("ENOENT")) {
|
||||
resolve({
|
||||
error:
|
||||
"VS Code CLI not found. Run 'Shell Command: Install code command in PATH' from the VS Code command palette.",
|
||||
});
|
||||
} else {
|
||||
resolve({ error: err.message });
|
||||
}
|
||||
});
|
||||
proc.on("close", (code) => {
|
||||
if (code !== 0) {
|
||||
if (stderr.includes("not found") || stderr.includes("ENOENT")) {
|
||||
resolve({
|
||||
error:
|
||||
"VS Code CLI not found. Run 'Shell Command: Install code command in PATH' from the VS Code command palette.",
|
||||
});
|
||||
} else {
|
||||
resolve({ error: `code --diff exited with ${code}: ${stderr}` });
|
||||
}
|
||||
} else {
|
||||
resolve({ ok: true });
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
195
extensions/plannotator/server/integrations.ts
Normal file
@@ -0,0 +1,195 @@
|
||||
/**
|
||||
* Note-taking app integrations (Obsidian, Bear, Octarine).
|
||||
* Node.js equivalents of packages/server/integrations.ts.
|
||||
* Config types, save functions, tag extraction, filename generation
|
||||
*/
|
||||
|
||||
import { execSync, spawn } from "node:child_process";
|
||||
import { existsSync, mkdirSync, statSync, writeFileSync } from "node:fs";
|
||||
import { basename, join } from "node:path";
|
||||
|
||||
import {
|
||||
type ObsidianConfig,
|
||||
type BearConfig,
|
||||
type OctarineConfig,
|
||||
type IntegrationResult,
|
||||
extractTitle,
|
||||
generateFrontmatter,
|
||||
generateFilename,
|
||||
generateOctarineFrontmatter,
|
||||
stripH1,
|
||||
buildHashtags,
|
||||
buildBearContent,
|
||||
detectObsidianVaults,
|
||||
} from "../generated/integrations-common.js";
|
||||
import { sanitizeTag } from "../generated/project.js";
|
||||
import { resolveUserPath } from "../generated/resolve-file.js";
|
||||
|
||||
export type { ObsidianConfig, BearConfig, OctarineConfig, IntegrationResult };
|
||||
export {
|
||||
extractTitle,
|
||||
generateFrontmatter,
|
||||
generateFilename,
|
||||
generateOctarineFrontmatter,
|
||||
stripH1,
|
||||
buildHashtags,
|
||||
buildBearContent,
|
||||
detectObsidianVaults,
|
||||
};
|
||||
|
||||
/** Detect project name from git or cwd (sync). Used by extractTags for note integrations. */
|
||||
function detectProjectNameSync(): string | null {
|
||||
try {
|
||||
const toplevel = execSync("git rev-parse --show-toplevel", {
|
||||
encoding: "utf-8",
|
||||
stdio: ["pipe", "pipe", "pipe"],
|
||||
}).trim();
|
||||
if (toplevel) {
|
||||
const name = sanitizeTag(basename(toplevel));
|
||||
if (name) return name;
|
||||
}
|
||||
} catch {
|
||||
/* not in a git repo */
|
||||
}
|
||||
try {
|
||||
return sanitizeTag(basename(process.cwd())) ?? null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
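/**
 * Derive note tags from the plan: always "plannotator", plus the project name,
 * up to three keywords from the H1 title, and any fenced code-block languages
 * (skipping data formats), capped at 7 tags total.
 */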
export async function extractTags(markdown: string): Promise<string[]> {
|
||||
const tags = new Set<string>(["plannotator"]);
|
||||
const projectName = detectProjectNameSync();
|
||||
if (projectName) tags.add(projectName);
|
||||
const stopWords = new Set([
|
||||
"the",
|
||||
"and",
|
||||
"for",
|
||||
"with",
|
||||
"this",
|
||||
"that",
|
||||
"from",
|
||||
"into",
|
||||
"plan",
|
||||
"implementation",
|
||||
"overview",
|
||||
"phase",
|
||||
"step",
|
||||
"steps",
|
||||
]);
|
||||
const h1Match = markdown.match(
|
||||
/^#\s+(?:Implementation\s+Plan:|Plan:)?\s*(.+)$/im,
|
||||
);
|
||||
if (h1Match) {
|
||||
h1Match[1]
|
||||
.toLowerCase()
|
||||
.replace(/[^\w\s-]/g, " ")
|
||||
.split(/\s+/)
|
||||
.filter((w) => w.length > 2 && !stopWords.has(w))
|
||||
.slice(0, 3)
|
||||
.forEach((w) => tags.add(w));
|
||||
}
|
||||
const seenLangs = new Set<string>();
|
||||
let langMatch: RegExpExecArray | null;
|
||||
const langRegex = /```(\w+)/g;
|
||||
while ((langMatch = langRegex.exec(markdown)) !== null) {
|
||||
const lang = langMatch[1];
|
||||
const n = lang.toLowerCase();
|
||||
if (
|
||||
!seenLangs.has(n) &&
|
||||
!["json", "yaml", "yml", "text", "txt", "markdown", "md"].includes(n)
|
||||
) {
|
||||
seenLangs.add(n);
|
||||
tags.add(n);
|
||||
}
|
||||
}
|
||||
return Array.from(tags).slice(0, 7);
|
||||
}
|
||||
|
||||
export async function saveToObsidian(
|
||||
config: ObsidianConfig,
|
||||
): Promise<IntegrationResult> {
|
||||
try {
|
||||
const { vaultPath, folder, plan } = config;
|
||||
if (!vaultPath?.trim()) {
|
||||
return { success: false, error: "Vault path is required" };
|
||||
}
|
||||
const normalizedVault = resolveUserPath(vaultPath);
|
||||
if (!existsSync(normalizedVault))
|
||||
return {
|
||||
success: false,
|
||||
error: `Vault path does not exist: ${normalizedVault}`,
|
||||
};
|
||||
if (!statSync(normalizedVault).isDirectory())
|
||||
return {
|
||||
success: false,
|
||||
error: `Vault path is not a directory: ${normalizedVault}`,
|
||||
};
|
||||
const folderName = folder.trim() || "plannotator";
|
||||
const targetFolder = join(normalizedVault, folderName);
|
||||
if (!existsSync(targetFolder)) mkdirSync(targetFolder, { recursive: true });
|
||||
const filename = generateFilename(
|
||||
plan,
|
||||
config.filenameFormat,
|
||||
config.filenameSeparator,
|
||||
);
|
||||
const filePath = join(targetFolder, filename);
|
||||
const tags = await extractTags(plan);
|
||||
const frontmatter = generateFrontmatter(tags);
|
||||
const content = `${frontmatter}\n\n[[Plannotator Plans]]\n\n${plan}`;
|
||||
writeFileSync(filePath, content);
|
||||
return { success: true, path: filePath };
|
||||
} catch (err) {
|
||||
return {
|
||||
success: false,
|
||||
error: err instanceof Error ? err.message : "Unknown error",
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
export async function saveToBear(
|
||||
config: BearConfig,
|
||||
): Promise<IntegrationResult> {
|
||||
try {
|
||||
const { plan, customTags, tagPosition = "append" } = config;
|
||||
const title = extractTitle(plan);
|
||||
const body = stripH1(plan);
|
||||
const tags = customTags?.trim() ? undefined : await extractTags(plan);
|
||||
const hashtags = buildHashtags(customTags, tags ?? []);
|
||||
const content = buildBearContent(body, hashtags, tagPosition);
|
||||
const url = `bear://x-callback-url/create?title=${encodeURIComponent(title)}&text=${encodeURIComponent(content)}&open_note=no`;
|
||||
spawn("open", [url], { stdio: "ignore" });
|
||||
return { success: true };
|
||||
} catch (err) {
|
||||
return {
|
||||
success: false,
|
||||
error: err instanceof Error ? err.message : "Unknown error",
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
export async function saveToOctarine(
|
||||
config: OctarineConfig,
|
||||
): Promise<IntegrationResult> {
|
||||
try {
|
||||
const { plan } = config;
|
||||
const workspace = config.workspace.trim();
|
||||
if (!workspace) return { success: false, error: "Workspace is required" };
|
||||
const folder = config.folder.trim() || "plannotator";
|
||||
const filename = generateFilename(plan);
|
||||
const base = filename.replace(/\.md$/, "");
|
||||
const path = folder ? `${folder}/${base}` : base;
|
||||
const tags = await extractTags(plan);
|
||||
const frontmatter = generateOctarineFrontmatter(tags);
|
||||
const content = `${frontmatter}\n\n${plan}`;
|
||||
const url = `octarine://create?path=${encodeURIComponent(path)}&content=${encodeURIComponent(content)}&workspace=${encodeURIComponent(workspace)}&fresh=true&openAfter=false`;
|
||||
spawn("open", [url], { stdio: "ignore" });
|
||||
return { success: true, path };
|
||||
} catch (err) {
|
||||
return {
|
||||
success: false,
|
||||
error: err instanceof Error ? err.message : "Unknown error",
|
||||
};
|
||||
}
|
||||
}
|
||||
109
extensions/plannotator/server/network.test.ts
Normal file
@@ -0,0 +1,109 @@
|
||||
import { afterEach, describe, expect, test } from "bun:test";
|
||||
import { getServerHostname, getServerPort, isRemoteSession } from "./network";
|
||||
|
||||
const savedEnv: Record<string, string | undefined> = {};
|
||||
const envKeys = ["PLANNOTATOR_REMOTE", "PLANNOTATOR_PORT", "SSH_TTY", "SSH_CONNECTION"];
|
||||
|
||||
function clearEnv() {
|
||||
for (const key of envKeys) {
|
||||
savedEnv[key] = process.env[key];
|
||||
delete process.env[key];
|
||||
}
|
||||
}
|
||||
|
||||
afterEach(() => {
|
||||
for (const key of envKeys) {
|
||||
if (savedEnv[key] !== undefined) {
|
||||
process.env[key] = savedEnv[key];
|
||||
} else {
|
||||
delete process.env[key];
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
describe("pi remote detection", () => {
|
||||
test("false by default", () => {
|
||||
clearEnv();
|
||||
expect(isRemoteSession()).toBe(false);
|
||||
});
|
||||
|
||||
test("true when PLANNOTATOR_REMOTE=1", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "1";
|
||||
expect(isRemoteSession()).toBe(true);
|
||||
});
|
||||
|
||||
test("true when PLANNOTATOR_REMOTE=true", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "true";
|
||||
expect(isRemoteSession()).toBe(true);
|
||||
});
|
||||
|
||||
test("false when PLANNOTATOR_REMOTE=0", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "0";
|
||||
expect(isRemoteSession()).toBe(false);
|
||||
});
|
||||
|
||||
test("false when PLANNOTATOR_REMOTE=false", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "false";
|
||||
expect(isRemoteSession()).toBe(false);
|
||||
});
|
||||
|
||||
test("PLANNOTATOR_REMOTE=false overrides SSH_TTY", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "false";
|
||||
process.env.SSH_TTY = "/dev/pts/0";
|
||||
expect(isRemoteSession()).toBe(false);
|
||||
});
|
||||
|
||||
test("PLANNOTATOR_REMOTE=0 overrides SSH_CONNECTION", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "0";
|
||||
process.env.SSH_CONNECTION = "192.168.1.1 12345 192.168.1.2 22";
|
||||
expect(isRemoteSession()).toBe(false);
|
||||
});
|
||||
|
||||
test("true when SSH_TTY is set and env var is unset", () => {
|
||||
clearEnv();
|
||||
process.env.SSH_TTY = "/dev/pts/0";
|
||||
expect(isRemoteSession()).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe("pi port selection", () => {
|
||||
test("uses random local port when false overrides SSH", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "false";
|
||||
process.env.SSH_TTY = "/dev/pts/0";
|
||||
expect(getServerPort()).toEqual({ port: 0, portSource: "random" });
|
||||
});
|
||||
|
||||
test("uses default remote port when SSH is detected", () => {
|
||||
clearEnv();
|
||||
process.env.SSH_CONNECTION = "192.168.1.1 12345 192.168.1.2 22";
|
||||
expect(getServerPort()).toEqual({ port: 19432, portSource: "remote-default" });
|
||||
});
|
||||
|
||||
test("PLANNOTATOR_PORT still takes precedence", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "false";
|
||||
process.env.SSH_TTY = "/dev/pts/0";
|
||||
process.env.PLANNOTATOR_PORT = "9999";
|
||||
expect(getServerPort()).toEqual({ port: 9999, portSource: "env" });
|
||||
});
|
||||
});
|
||||
|
||||
describe("pi server hostname", () => {
|
||||
test("binds local sessions to loopback", () => {
|
||||
clearEnv();
|
||||
expect(getServerHostname()).toBe("127.0.0.1");
|
||||
});
|
||||
|
||||
test("binds remote sessions to all interfaces", () => {
|
||||
clearEnv();
|
||||
process.env.PLANNOTATOR_REMOTE = "1";
|
||||
expect(getServerHostname()).toBe("0.0.0.0");
|
||||
});
|
||||
});
|
||||
173
extensions/plannotator/server/network.ts
Normal file
@@ -0,0 +1,173 @@
|
||||
/**
|
||||
* Network utilities — remote detection, port binding, browser opening.
|
||||
* isRemoteSession, getServerPort, listenOnPort, openBrowser
|
||||
*/
|
||||
|
||||
import { spawn } from "node:child_process";
|
||||
import type { Server } from "node:http";
|
||||
import { release } from "node:os";
|
||||
|
||||
const DEFAULT_REMOTE_PORT = 19432;
|
||||
const LOOPBACK_HOST = "127.0.0.1";
|
||||
|
||||
/**
|
||||
* Check if running in a remote session (SSH, devcontainer, etc.)
|
||||
* Honors PLANNOTATOR_REMOTE as a tri-state override, or detects SSH_TTY/SSH_CONNECTION.
|
||||
*/
|
||||
function getRemoteOverride(): boolean | null {
|
||||
const remote = process.env.PLANNOTATOR_REMOTE;
|
||||
if (remote === undefined) {
|
||||
return null;
|
||||
}
|
||||
|
||||
if (remote === "1" || remote?.toLowerCase() === "true") {
|
||||
return true;
|
||||
}
|
||||
|
||||
if (remote === "0" || remote?.toLowerCase() === "false") {
|
||||
return false;
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
export function isRemoteSession(): boolean {
|
||||
const remoteOverride = getRemoteOverride();
|
||||
if (remoteOverride !== null) {
|
||||
return remoteOverride;
|
||||
}
|
||||
// Legacy SSH detection
|
||||
if (process.env.SSH_TTY || process.env.SSH_CONNECTION) {
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the server port to use.
|
||||
* - PLANNOTATOR_PORT env var takes precedence
|
||||
* - Remote sessions default to 19432 (for port forwarding)
|
||||
* - Local sessions use random port
|
||||
* Returns { port, portSource } so caller can notify user if needed.
|
||||
*/
|
||||
export function getServerPort(): {
|
||||
port: number;
|
||||
portSource: "env" | "remote-default" | "random";
|
||||
} {
|
||||
const envPort = process.env.PLANNOTATOR_PORT;
|
||||
if (envPort) {
|
||||
const parsed = parseInt(envPort, 10);
|
||||
if (!Number.isNaN(parsed) && parsed > 0 && parsed < 65536) {
|
||||
return { port: parsed, portSource: "env" };
|
||||
}
|
||||
// Invalid port - fall back silently, caller can check env var themselves
|
||||
}
|
||||
if (isRemoteSession()) {
|
||||
return { port: DEFAULT_REMOTE_PORT, portSource: "remote-default" };
|
||||
}
|
||||
return { port: 0, portSource: "random" };
|
||||
}
|
||||
|
||||
export function getServerHostname(): string {
|
||||
return isRemoteSession() ? "0.0.0.0" : LOOPBACK_HOST;
|
||||
}
|
||||
|
||||
const MAX_RETRIES = 5;
|
||||
const RETRY_DELAY_MS = 500;
|
||||
|
||||
export async function listenOnPort(
|
||||
server: Server,
|
||||
): Promise<{ port: number; portSource: "env" | "remote-default" | "random" }> {
|
||||
const result = getServerPort();
|
||||
|
||||
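// A fixed remote port may still be held by a previous instance; retry briefly on EADDRINUSE.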
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
|
||||
try {
|
||||
await new Promise<void>((resolve, reject) => {
|
||||
server.once("error", reject);
|
||||
server.listen(
|
||||
result.port,
|
||||
getServerHostname(),
|
||||
() => {
|
||||
server.removeListener("error", reject);
|
||||
resolve();
|
||||
},
|
||||
);
|
||||
});
|
||||
const addr = server.address() as { port: number };
|
||||
return { port: addr.port, portSource: result.portSource };
|
||||
} catch (err: unknown) {
|
||||
const isAddressInUse =
|
||||
err instanceof Error && err.message.includes("EADDRINUSE");
|
||||
if (isAddressInUse && attempt < MAX_RETRIES) {
|
||||
await new Promise((r) => setTimeout(r, RETRY_DELAY_MS));
|
||||
continue;
|
||||
}
|
||||
if (isAddressInUse) {
|
||||
const hint = isRemoteSession()
|
||||
? " (set PLANNOTATOR_PORT to use a different port)"
|
||||
: "";
|
||||
throw new Error(
|
||||
`Port ${result.port} in use after ${MAX_RETRIES} retries${hint}`,
|
||||
);
|
||||
}
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
// Unreachable, but satisfies TypeScript
|
||||
throw new Error("Failed to bind port");
|
||||
}
|
||||
|
||||
/**
|
||||
* Open URL in system browser (Node-compatible, no Bun $ dependency).
|
||||
* Honors PLANNOTATOR_BROWSER and BROWSER env vars.
|
||||
* Returns { opened: true } if browser was opened, { opened: false, isRemote: true, url } if remote session.
|
||||
*/
|
||||
export function openBrowser(url: string): {
|
||||
opened: boolean;
|
||||
isRemote?: boolean;
|
||||
url?: string;
|
||||
} {
|
||||
const browser = process.env.PLANNOTATOR_BROWSER || process.env.BROWSER;
|
||||
if (isRemoteSession() && !browser) {
|
||||
return { opened: false, isRemote: true, url };
|
||||
}
|
||||
|
||||
try {
|
||||
const platform = process.platform;
|
||||
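// WSL reports as linux but needs cmd.exe to reach the Windows default browser.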
const wsl =
|
||||
platform === "linux" && release().toLowerCase().includes("microsoft");
|
||||
|
||||
let cmd: string;
|
||||
let args: string[];
|
||||
|
||||
if (browser) {
|
||||
if (process.env.PLANNOTATOR_BROWSER && platform === "darwin") {
|
||||
cmd = "open";
|
||||
args = ["-a", browser, url];
|
||||
} else if (platform === "win32" || wsl) {
|
||||
cmd = "cmd.exe";
|
||||
args = ["/c", "start", "", browser, url];
|
||||
} else {
|
||||
cmd = browser;
|
||||
args = [url];
|
||||
}
|
||||
} else if (platform === "win32" || wsl) {
|
||||
cmd = "cmd.exe";
|
||||
args = ["/c", "start", "", url];
|
||||
} else if (platform === "darwin") {
|
||||
cmd = "open";
|
||||
args = [url];
|
||||
} else {
|
||||
cmd = "xdg-open";
|
||||
args = [url];
|
||||
}
|
||||
|
||||
const child = spawn(cmd, args, { detached: true, stdio: "ignore" });
|
||||
child.once("error", () => {});
|
||||
child.unref();
|
||||
return { opened: true };
|
||||
} catch {
|
||||
return { opened: false };
|
||||
}
|
||||
}
|
||||
124
extensions/plannotator/server/pr.ts
Normal file
@@ -0,0 +1,124 @@
|
||||
/**
|
||||
* PR/MR provider for Node.js runtime.
|
||||
* Node.js PRRuntime + bound dispatch functions from shared pr-provider.
|
||||
*/
|
||||
|
||||
import { spawn } from "node:child_process";
|
||||
|
||||
import {
|
||||
checkAuth as checkAuthCore,
|
||||
fetchPRContext as fetchPRContextCore,
|
||||
fetchPR as fetchPRCore,
|
||||
fetchPRFileContent as fetchPRFileContentCore,
|
||||
fetchPRViewedFiles as fetchPRViewedFilesCore,
|
||||
fetchPRStack as fetchPRStackCore,
|
||||
fetchPRList as fetchPRListCore,
|
||||
getUser as getUserCore,
|
||||
markPRFilesViewed as markPRFilesViewedCore,
|
||||
type PRMetadata,
|
||||
type PRRef,
|
||||
type PRReviewFileComment,
|
||||
type PRRuntime,
|
||||
type PRStackTree,
|
||||
type PRListItem,
|
||||
parsePRUrl as parsePRUrlCore,
|
||||
submitPRReview as submitPRReviewCore,
|
||||
} from "../generated/pr-provider.js";
|
||||
|
||||
const prRuntime: PRRuntime = {
|
||||
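// Collects stdout/stderr and resolves with the exit code; only spawn failures reject.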
async runCommand(cmd, args) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const proc = spawn(cmd, args, { stdio: ["ignore", "pipe", "pipe"] });
|
||||
let stdout = "";
|
||||
let stderr = "";
|
||||
proc.stdout?.on("data", (chunk: Buffer) => {
|
||||
stdout += chunk.toString();
|
||||
});
|
||||
proc.stderr?.on("data", (chunk: Buffer) => {
|
||||
stderr += chunk.toString();
|
||||
});
|
||||
proc.on("error", reject);
|
||||
proc.on("close", (exitCode) => {
|
||||
resolve({ stdout, stderr, exitCode: exitCode ?? 1 });
|
||||
});
|
||||
});
|
||||
},
|
||||
async runCommandWithInput(cmd, args, input) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const proc = spawn(cmd, args, { stdio: ["pipe", "pipe", "pipe"] });
|
||||
let stdout = "";
|
||||
let stderr = "";
|
||||
proc.stdout?.on("data", (chunk: Buffer) => {
|
||||
stdout += chunk.toString();
|
||||
});
|
||||
proc.stderr?.on("data", (chunk: Buffer) => {
|
||||
stderr += chunk.toString();
|
||||
});
|
||||
proc.on("error", reject);
|
||||
proc.on("close", (exitCode) => {
|
||||
resolve({ stdout, stderr, exitCode: exitCode ?? 1 });
|
||||
});
|
||||
proc.stdin?.write(input);
|
||||
proc.stdin?.end();
|
||||
});
|
||||
},
|
||||
};
|
||||
|
||||
export const parsePRUrl = parsePRUrlCore;
|
||||
export function checkPRAuth(ref: PRRef) {
|
||||
return checkAuthCore(prRuntime, ref);
|
||||
}
|
||||
export function getPRUser(ref: PRRef) {
|
||||
return getUserCore(prRuntime, ref);
|
||||
}
|
||||
export function fetchPR(ref: PRRef) {
|
||||
return fetchPRCore(prRuntime, ref);
|
||||
}
|
||||
export function fetchPRContext(ref: PRRef) {
|
||||
return fetchPRContextCore(prRuntime, ref);
|
||||
}
|
||||
export function fetchPRFileContent(ref: PRRef, sha: string, filePath: string) {
|
||||
return fetchPRFileContentCore(prRuntime, ref, sha, filePath);
|
||||
}
|
||||
export function submitPRReview(
|
||||
ref: PRRef,
|
||||
headSha: string,
|
||||
action: "approve" | "comment",
|
||||
body: string,
|
||||
fileComments: PRReviewFileComment[],
|
||||
) {
|
||||
return submitPRReviewCore(
|
||||
prRuntime,
|
||||
ref,
|
||||
headSha,
|
||||
action,
|
||||
body,
|
||||
fileComments,
|
||||
);
|
||||
}
|
||||
|
||||
export function fetchPRViewedFiles(ref: PRRef): Promise<Record<string, boolean>> {
|
||||
return fetchPRViewedFilesCore(prRuntime, ref);
|
||||
}
|
||||
|
||||
export function markPRFilesViewed(
|
||||
ref: PRRef,
|
||||
prNodeId: string,
|
||||
filePaths: string[],
|
||||
viewed: boolean,
|
||||
): Promise<void> {
|
||||
return markPRFilesViewedCore(prRuntime, ref, prNodeId, filePaths, viewed);
|
||||
}
|
||||
|
||||
export function fetchPRStack(
|
||||
ref: PRRef,
|
||||
metadata: PRMetadata,
|
||||
): Promise<PRStackTree | null> {
|
||||
return fetchPRStackCore(prRuntime, ref, metadata);
|
||||
}
|
||||
|
||||
export function fetchPRList(
|
||||
ref: PRRef,
|
||||
): Promise<PRListItem[]> {
|
||||
return fetchPRListCore(prRuntime, ref);
|
||||
}
|
||||
64
extensions/plannotator/server/project.ts
Normal file
@@ -0,0 +1,64 @@
|
||||
/**
|
||||
* Project detection — repo info, project name, remote URL parsing.
|
||||
* detectProjectName, getRepoInfo, parseRemoteUrl
|
||||
*/
|
||||
|
||||
import { execSync } from "node:child_process";
|
||||
import { basename } from "node:path";
|
||||
import { sanitizeTag } from "../generated/project.js";
|
||||
import { parseRemoteUrl, getDirName } from "../generated/repo.js";
|
||||
|
||||
/** Run a git command and return stdout (empty string on error). */
|
||||
function git(cmd: string): string {
|
||||
try {
|
||||
return execSync(`git ${cmd}`, {
|
||||
encoding: "utf-8",
|
||||
stdio: ["pipe", "pipe", "pipe"],
|
||||
}).trim();
|
||||
} catch {
|
||||
return "";
|
||||
}
|
||||
}
|
||||
|
||||
export function detectProjectName(): string {
|
||||
try {
|
||||
const toplevel = execSync("git rev-parse --show-toplevel", {
|
||||
encoding: "utf-8",
|
||||
stdio: ["pipe", "pipe", "pipe"],
|
||||
}).trim();
|
||||
const name = basename(toplevel);
|
||||
return sanitizeTag(name) ?? "_unknown";
|
||||
} catch {
|
||||
// Not a git repo — fall back to cwd
|
||||
}
|
||||
try {
|
||||
const name = basename(process.cwd());
|
||||
return sanitizeTag(name) ?? "_unknown";
|
||||
} catch {
|
||||
return "_unknown";
|
||||
}
|
||||
}
|
||||
|
||||
export function getRepoInfo(): { display: string; branch?: string } | null {
|
||||
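// Prefer org/repo from the origin remote, then the repo directory name, then the cwd name.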
const branch = git("rev-parse --abbrev-ref HEAD");
|
||||
const safeBranch = branch && branch !== "HEAD" ? branch : undefined;
|
||||
|
||||
const originUrl = git("remote get-url origin");
|
||||
const orgRepo = parseRemoteUrl(originUrl);
|
||||
if (orgRepo) {
|
||||
return { display: orgRepo, branch: safeBranch };
|
||||
}
|
||||
|
||||
const topLevel = git("rev-parse --show-toplevel");
|
||||
const repoName = getDirName(topLevel);
|
||||
if (repoName) {
|
||||
return { display: repoName, branch: safeBranch };
|
||||
}
|
||||
|
||||
const cwdName = getDirName(process.cwd());
|
||||
if (cwdName) {
|
||||
return { display: cwdName };
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
358
extensions/plannotator/server/reference.ts
Normal file
@@ -0,0 +1,358 @@
|
||||
/**
|
||||
* Document and reference handlers (Node.js equivalents of packages/server/reference-handlers.ts).
|
||||
* VaultNode, buildFileTree, walkMarkdownFiles, handleDocRequest,
|
||||
* detectObsidianVaults, handleObsidian*, handleFileBrowserRequest
|
||||
*/
|
||||
|
||||
import {
|
||||
existsSync,
|
||||
readdirSync,
|
||||
readFileSync,
|
||||
statSync,
|
||||
type Dirent,
|
||||
} from "node:fs";
|
||||
import type { ServerResponse } from "node:http";
|
||||
import { join, resolve as resolvePath } from "node:path";
|
||||
|
||||
import { json, parseBody } from "./helpers";
|
||||
import type { IncomingMessage } from "node:http";
|
||||
|
||||
import {
|
||||
type VaultNode,
|
||||
buildFileTree,
|
||||
FILE_BROWSER_EXCLUDED,
|
||||
} from "../generated/reference-common.js";
|
||||
import { detectObsidianVaults } from "../generated/integrations-common.js";
|
||||
import {
|
||||
isAbsoluteUserPath,
|
||||
isCodeFilePath,
|
||||
resolveCodeFile,
|
||||
resolveMarkdownFile,
|
||||
resolveUserPath,
|
||||
isWithinProjectRoot,
|
||||
warmFileListCache,
|
||||
} from "../generated/resolve-file.js";
|
||||
import { htmlToMarkdown } from "../generated/html-to-markdown.js";
|
||||
import { preloadFile } from "@pierre/diffs/ssr";
|
||||
|
||||
type Res = ServerResponse;
|
||||
|
||||
/** Recursively walk a directory collecting files by extension, skipping ignored dirs. */
|
||||
function walkMarkdownFiles(dir: string, root: string, results: string[], extensions: RegExp = /\.(mdx?|html?)$/i): void {
|
||||
let entries: Dirent[];
|
||||
try {
|
||||
entries = readdirSync(dir, { withFileTypes: true }) as Dirent[];
|
||||
} catch {
|
||||
return;
|
||||
}
|
||||
for (const entry of entries) {
|
||||
if (entry.isDirectory()) {
|
||||
if (FILE_BROWSER_EXCLUDED.includes(entry.name + "/")) continue;
|
||||
walkMarkdownFiles(join(dir, entry.name), root, results, extensions);
|
||||
} else if (entry.isFile() && extensions.test(entry.name)) {
|
||||
const relative = join(dir, entry.name)
|
||||
.slice(root.length + 1)
|
||||
.replace(/\\/g, "/");
|
||||
results.push(relative);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/** Serve a linked markdown document. Uses shared resolveMarkdownFile for parity with Bun server. */
|
||||
export async function handleDocRequest(res: Res, url: URL): Promise<void> {
|
||||
const requestedPath = url.searchParams.get("path");
|
||||
if (!requestedPath) {
|
||||
json(res, { error: "Missing path parameter" }, 400);
|
||||
return;
|
||||
}
|
||||
|
||||
// Side-channel: warm the code-file walk so /api/doc/exists POSTs land warm.
|
||||
void warmFileListCache(process.cwd(), "code");
|
||||
|
||||
// Try resolving relative to base directory first (used by annotate mode).
|
||||
// No isWithinProjectRoot check here — intentional, matches pre-existing
|
||||
// markdown behavior. The base param is set server-side by the annotate
|
||||
// server (see serverAnnotate.ts /api/doc route). The standalone HTML
|
||||
// block below (no base) retains its cwd-based containment check.
|
||||
const base = url.searchParams.get("base");
|
||||
const resolvedBase = base ? resolveUserPath(base) : null;
|
||||
if (
|
||||
resolvedBase &&
|
||||
!isAbsoluteUserPath(requestedPath) &&
|
||||
/\.(mdx?|html?)$/i.test(requestedPath)
|
||||
) {
|
||||
const fromBase = resolveUserPath(requestedPath, resolvedBase);
|
||||
try {
|
||||
if (existsSync(fromBase)) {
|
||||
const raw = readFileSync(fromBase, "utf-8");
|
||||
const isHtml = /\.html?$/i.test(requestedPath);
|
||||
const markdown = isHtml ? htmlToMarkdown(raw) : raw;
|
||||
json(res, { markdown, filepath: fromBase, isConverted: isHtml });
|
||||
return;
|
||||
}
|
||||
} catch {
|
||||
/* fall through to standard resolution */
|
||||
}
|
||||
}
|
||||
|
||||
// HTML files: resolve directly (not via resolveMarkdownFile which only handles .md/.mdx)
|
||||
const projectRoot = process.cwd();
|
||||
if (/\.html?$/i.test(requestedPath)) {
|
||||
const resolvedHtml = resolveUserPath(requestedPath, resolvedBase || projectRoot);
|
||||
if (!isWithinProjectRoot(resolvedHtml, projectRoot)) {
|
||||
json(res, { error: "Access denied: path is outside project root" }, 403);
|
||||
return;
|
||||
}
|
||||
try {
|
||||
if (existsSync(resolvedHtml)) {
|
||||
const html = readFileSync(resolvedHtml, "utf-8");
|
||||
json(res, { markdown: htmlToMarkdown(html), filepath: resolvedHtml, isConverted: true });
|
||||
return;
|
||||
}
|
||||
} catch { /* fall through to 404 */ }
|
||||
json(res, { error: `File not found: ${requestedPath}` }, 404);
|
||||
return;
|
||||
}
|
||||
|
||||
// Code files: try literal resolve first; on miss, fall back to smart resolver.
|
||||
if (isCodeFilePath(requestedPath)) {
|
||||
const literalPath = resolveUserPath(requestedPath, resolvedBase || projectRoot);
|
||||
const literalAllowed = resolvedBase || isWithinProjectRoot(literalPath, projectRoot);
|
||||
|
||||
let resolvedCode: string | null = null;
|
||||
if (literalAllowed && existsSync(literalPath)) {
|
||||
resolvedCode = literalPath;
|
||||
}
|
||||
|
||||
if (!resolvedCode) {
|
||||
const result = await resolveCodeFile(requestedPath, projectRoot);
|
||||
if (result.kind === "found") {
|
||||
resolvedCode = result.path;
|
||||
} else if (result.kind === "ambiguous") {
|
||||
const prefix = `${projectRoot}/`;
|
||||
const relative = result.matches.map((m: string) =>
|
||||
m.startsWith(prefix) ? m.slice(prefix.length) : m,
|
||||
);
|
||||
json(res, { error: `Ambiguous path '${requestedPath}'`, matches: relative }, 400);
|
||||
return;
|
||||
} else {
|
||||
json(res, { error: `File not found: ${requestedPath}` }, 404);
|
||||
return;
|
||||
}
|
||||
if (!isWithinProjectRoot(resolvedCode, projectRoot)) {
|
||||
json(res, { error: "Access denied: path is outside project root" }, 403);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
try {
|
||||
const stat = statSync(resolvedCode);
|
||||
if (stat.size > 2 * 1024 * 1024) {
|
||||
json(res, { error: "File too large (max 2MB)" }, 413);
|
||||
return;
|
||||
}
|
||||
const contents = readFileSync(resolvedCode, "utf-8");
|
||||
const displayName = resolvedCode.split("/").pop() || resolvedCode;
|
||||
let prerenderedHTML: string | undefined;
|
||||
try {
|
||||
const result = await preloadFile({
|
||||
file: { name: displayName, contents },
|
||||
options: { disableFileHeader: true },
|
||||
});
|
||||
prerenderedHTML = result.prerenderedHTML;
|
||||
} catch {
|
||||
// Fall back to client-side rendering
|
||||
}
|
||||
json(res, { codeFile: true, contents, filepath: resolvedCode, prerenderedHTML });
|
||||
return;
|
||||
} catch {
|
||||
json(res, { error: `File not found: ${requestedPath}` }, 404);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
const result = resolveMarkdownFile(requestedPath, projectRoot);
|
||||
|
||||
if (result.kind === "ambiguous") {
|
||||
json(
|
||||
res,
|
||||
{
|
||||
error: `Ambiguous filename '${result.input}': found ${result.matches.length} matches`,
|
||||
matches: result.matches,
|
||||
},
|
||||
400,
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
if (result.kind === "not_found" || result.kind === "unavailable") {
|
||||
json(res, { error: `File not found: ${result.input}` }, 404);
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const markdown = readFileSync(result.path, "utf-8");
|
||||
json(res, { markdown, filepath: result.path });
|
||||
} catch {
|
||||
json(res, { error: "Failed to read file" }, 500);
|
||||
}
|
||||
}
|
||||
|
||||
/**
 * Batch existence check for code-file paths the renderer wants to linkify.
 * POST /api/doc/exists with { paths: string[] }.
 *
 * TODO(security): see packages/server/reference-handlers.ts handleDocExists —
 * both absolute paths in `paths[]` AND the `base` field are honored verbatim
 * with no project-root containment check, leaking file existence back to the
 * caller. Fix in lockstep with the Bun handler.
 */
export async function handleDocExistsRequest(res: Res, req: IncomingMessage): Promise<void> {
|
||||
const body = await parseBody(req);
|
||||
const paths = (body as { paths?: unknown }).paths;
|
||||
if (!Array.isArray(paths) || !paths.every((p) => typeof p === "string")) {
|
||||
json(res, { error: "Expected { paths: string[] }" }, 400);
|
||||
return;
|
||||
}
|
||||
if (paths.length > 500) {
|
||||
json(res, { error: "Too many paths (max 500)" }, 400);
|
||||
return;
|
||||
}
|
||||
const baseRaw = (body as { base?: unknown }).base;
|
||||
const baseDir = typeof baseRaw === "string" && baseRaw.length > 0
|
||||
? resolveUserPath(baseRaw)
|
||||
: undefined;
|
||||
|
||||
const projectRoot = process.cwd();
|
||||
const results: Record<
|
||||
string,
|
||||
| { status: "found"; resolved: string }
|
||||
| { status: "ambiguous"; matches: string[] }
|
||||
| { status: "missing" }
|
||||
| { status: "unavailable" }
|
||||
> = {};
|
||||
|
||||
await Promise.all(
|
||||
(paths as string[]).map(async (p) => {
|
||||
const r = await resolveCodeFile(p, projectRoot, baseDir);
|
||||
if (r.kind === "found") {
|
||||
results[p] = { status: "found", resolved: r.path };
|
||||
} else if (r.kind === "ambiguous") {
|
||||
const prefix = `${projectRoot}/`;
|
||||
results[p] = {
|
||||
status: "ambiguous",
|
||||
matches: r.matches.map((m: string) => (m.startsWith(prefix) ? m.slice(prefix.length) : m)),
|
||||
};
|
||||
} else if (r.kind === "unavailable") {
|
||||
results[p] = { status: "unavailable" };
|
||||
} else {
|
||||
results[p] = { status: "missing" };
|
||||
}
|
||||
}),
|
||||
);
|
||||
|
||||
json(res, { results });
|
||||
}
|
||||
|
||||
export function handleObsidianVaultsRequest(res: Res): void {
|
||||
json(res, { vaults: detectObsidianVaults() });
|
||||
}
|
||||
|
||||
export function handleObsidianFilesRequest(res: Res, url: URL): void {
|
||||
const vaultPath = url.searchParams.get("vaultPath");
|
||||
if (!vaultPath) {
|
||||
json(res, { error: "Missing vaultPath parameter" }, 400);
|
||||
return;
|
||||
}
|
||||
const resolvedVault = resolveUserPath(vaultPath);
|
||||
if (!existsSync(resolvedVault) || !statSync(resolvedVault).isDirectory()) {
|
||||
json(res, { error: "Invalid vault path" }, 400);
|
||||
return;
|
||||
}
|
||||
try {
|
||||
const files: string[] = [];
|
||||
walkMarkdownFiles(resolvedVault, resolvedVault, files, /\.mdx?$/i);
|
||||
files.sort();
|
||||
json(res, { tree: buildFileTree(files) });
|
||||
} catch {
|
||||
json(res, { error: "Failed to list vault files" }, 500);
|
||||
}
|
||||
}
|
||||
|
||||
export function handleObsidianDocRequest(res: Res, url: URL): void {
|
||||
const vaultPath = url.searchParams.get("vaultPath");
|
||||
const filePath = url.searchParams.get("path");
|
||||
if (!vaultPath || !filePath) {
|
||||
json(res, { error: "Missing vaultPath or path parameter" }, 400);
|
||||
return;
|
||||
}
|
||||
if (!/\.mdx?$/i.test(filePath)) {
|
||||
json(res, { error: "Only markdown files are supported" }, 400);
|
||||
return;
|
||||
}
|
||||
const resolvedVault = resolveUserPath(vaultPath);
|
||||
let resolvedFile = resolvePath(resolvedVault, filePath);
|
||||
|
||||
// Bare filename search within vault
|
||||
if (!existsSync(resolvedFile) && !filePath.includes("/")) {
|
||||
const files: string[] = [];
|
||||
walkMarkdownFiles(resolvedVault, resolvedVault, files, /\.mdx?$/i);
|
||||
const matches = files.filter(
|
||||
(f) => f.split("/").pop()!.toLowerCase() === filePath.toLowerCase(),
|
||||
);
|
||||
if (matches.length === 1) {
|
||||
resolvedFile = resolvePath(resolvedVault, matches[0]);
|
||||
} else if (matches.length > 1) {
|
||||
json(
|
||||
res,
|
||||
{
|
||||
error: `Ambiguous filename '${filePath}': found ${matches.length} matches`,
|
||||
matches,
|
||||
},
|
||||
400,
|
||||
);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// Security: must be within vault
|
||||
if (
|
||||
!resolvedFile.startsWith(resolvedVault + "/") &&
|
||||
resolvedFile !== resolvedVault
|
||||
) {
|
||||
json(res, { error: "Access denied: path is outside vault" }, 403);
|
||||
return;
|
||||
}
|
||||
|
||||
if (!existsSync(resolvedFile)) {
|
||||
json(res, { error: `File not found: ${filePath}` }, 404);
|
||||
return;
|
||||
}
|
||||
try {
|
||||
const markdown = readFileSync(resolvedFile, "utf-8");
|
||||
json(res, { markdown, filepath: resolvedFile });
|
||||
} catch {
|
||||
json(res, { error: "Failed to read file" }, 500);
|
||||
}
|
||||
}
|
||||
|
||||
export function handleFileBrowserRequest(res: Res, url: URL): void {
|
||||
const dirPath = url.searchParams.get("dirPath");
|
||||
if (!dirPath) {
|
||||
json(res, { error: "Missing dirPath parameter" }, 400);
|
||||
return;
|
||||
}
|
||||
const resolvedDir = resolveUserPath(dirPath);
|
||||
if (!existsSync(resolvedDir) || !statSync(resolvedDir).isDirectory()) {
|
||||
json(res, { error: "Invalid directory path" }, 400);
|
||||
return;
|
||||
}
|
||||
try {
|
||||
const files: string[] = [];
|
||||
walkMarkdownFiles(resolvedDir, resolvedDir, files);
|
||||
files.sort();
|
||||
json(res, { tree: buildFileTree(files) });
|
||||
} catch {
|
||||
json(res, { error: "Failed to list directory files" }, 500);
|
||||
}
|
||||
}
|
||||
181
extensions/plannotator/server/serverAnnotate.ts
Normal file
181
extensions/plannotator/server/serverAnnotate.ts
Normal file
@@ -0,0 +1,181 @@
|
||||
import { createServer } from "node:http";
|
||||
import { dirname, resolve as resolvePath } from "node:path";
|
||||
|
||||
import { contentHash, deleteDraft } from "../generated/draft.js";
|
||||
import { saveConfig, detectGitUser, getServerConfig } from "../generated/config.js";
|
||||
|
||||
import {
|
||||
handleDraftRequest,
|
||||
handleFavicon,
|
||||
handleImageRequest,
|
||||
handleUploadRequest,
|
||||
} from "./handlers.js";
|
||||
import { html, json, parseBody, requestUrl } from "./helpers.js";
|
||||
|
||||
import { listenOnPort } from "./network.js";
|
||||
|
||||
import { getRepoInfo } from "./project.js";
|
||||
import {
|
||||
handleDocRequest,
|
||||
handleDocExistsRequest,
|
||||
handleFileBrowserRequest,
|
||||
handleObsidianVaultsRequest,
|
||||
handleObsidianFilesRequest,
|
||||
handleObsidianDocRequest,
|
||||
} from "./reference.js";
|
||||
import { warmFileListCache } from "../generated/resolve-file.js";
|
||||
import { createExternalAnnotationHandler } from "./external-annotations.js";
|
||||
|
||||
export interface AnnotateServerResult {
|
||||
port: number;
|
||||
portSource: "env" | "remote-default" | "random";
|
||||
url: string;
|
||||
waitForDecision: () => Promise<{ feedback: string; annotations: unknown[]; exit?: boolean; approved?: boolean }>;
|
||||
stop: () => void;
|
||||
}
|
||||
|
||||
export async function startAnnotateServer(options: {
|
||||
markdown: string;
|
||||
filePath: string;
|
||||
htmlContent: string;
|
||||
origin?: string;
|
||||
mode?: string;
|
||||
folderPath?: string;
|
||||
sharingEnabled?: boolean;
|
||||
shareBaseUrl?: string;
|
||||
pasteApiUrl?: string;
|
||||
sourceInfo?: string;
|
||||
sourceConverted?: boolean;
|
||||
gate?: boolean;
|
||||
}): Promise<AnnotateServerResult> {
|
||||
// Side-channel pre-warm so /api/doc/exists POSTs land on warm cache.
|
||||
void warmFileListCache(process.cwd(), "code");
|
||||
const gitUser = detectGitUser();
|
||||
const sharingEnabled =
|
||||
options.sharingEnabled ?? process.env.PLANNOTATOR_SHARE !== "disabled";
|
||||
const shareBaseUrl =
|
||||
(options.shareBaseUrl ?? process.env.PLANNOTATOR_SHARE_URL) || undefined;
|
||||
const pasteApiUrl =
|
||||
(options.pasteApiUrl ?? process.env.PLANNOTATOR_PASTE_URL) || undefined;
|
||||
|
||||
let resolveDecision!: (result: {
|
||||
feedback: string;
|
||||
annotations: unknown[];
|
||||
exit?: boolean;
|
||||
approved?: boolean;
|
||||
}) => void;
|
||||
const decisionPromise = new Promise<{
|
||||
feedback: string;
|
||||
annotations: unknown[];
|
||||
exit?: boolean;
|
||||
approved?: boolean;
|
||||
}>((r) => {
|
||||
resolveDecision = r;
|
||||
});
|
||||
|
||||
// Folder annotation has no stable markdown body, so key drafts by folder path instead.
|
||||
const draftSource =
|
||||
options.mode === "annotate-folder" && options.folderPath
|
||||
? `folder:${resolvePath(options.folderPath)}`
|
||||
: options.markdown;
|
||||
const draftKey = contentHash(draftSource);
|
||||
|
||||
// Detect repo info (cached for this session)
|
||||
const repoInfo = getRepoInfo();
|
||||
|
||||
const externalAnnotations = createExternalAnnotationHandler("plan");
|
||||
|
||||
const server = createServer(async (req, res) => {
|
||||
const url = requestUrl(req);
|
||||
|
||||
if (await externalAnnotations.handle(req, res, url)) return;
|
||||
|
||||
if (url.pathname === "/api/plan" && req.method === "GET") {
|
||||
json(res, {
|
||||
plan: options.markdown,
|
||||
origin: options.origin ?? "pi",
|
||||
mode: options.mode || "annotate",
|
||||
filePath: options.filePath,
|
||||
sourceInfo: options.sourceInfo,
|
||||
sourceConverted: options.sourceConverted ?? false,
|
||||
gate: options.gate ?? false,
|
||||
sharingEnabled,
|
||||
shareBaseUrl,
|
||||
pasteApiUrl,
|
||||
repoInfo,
|
||||
projectRoot: options.folderPath || process.cwd(),
|
||||
serverConfig: getServerConfig(gitUser),
|
||||
});
|
||||
} else if (url.pathname === "/api/config" && req.method === "POST") {
|
||||
try {
|
||||
const body = (await parseBody(req)) as { displayName?: string; diffOptions?: Record<string, unknown>; conventionalComments?: boolean };
|
||||
const toSave: Record<string, unknown> = {};
|
||||
if (body.displayName !== undefined) toSave.displayName = body.displayName;
|
||||
if (body.diffOptions !== undefined) toSave.diffOptions = body.diffOptions;
|
||||
if (body.conventionalComments !== undefined) toSave.conventionalComments = body.conventionalComments;
|
||||
if (Object.keys(toSave).length > 0) saveConfig(toSave as Parameters<typeof saveConfig>[0]);
|
||||
json(res, { ok: true });
|
||||
} catch {
|
||||
json(res, { error: "Invalid request" }, 400);
|
||||
}
|
||||
} else if (url.pathname === "/api/image") {
|
||||
handleImageRequest(res, url);
|
||||
} else if (url.pathname === "/api/upload" && req.method === "POST") {
|
||||
await handleUploadRequest(req, res);
|
||||
} else if (url.pathname === "/api/draft") {
|
||||
await handleDraftRequest(req, res, draftKey);
|
||||
} else if (url.pathname === "/api/doc" && req.method === "GET") {
|
||||
// Inject source file's directory as base for relative path resolution.
|
||||
// Skip for URL annotations — there's no local directory to resolve against.
|
||||
if (!url.searchParams.has("base") && options.filePath && !/^https?:\/\//i.test(options.filePath)) {
|
||||
url.searchParams.set("base", dirname(resolvePath(options.filePath)));
|
||||
}
|
||||
await handleDocRequest(res, url);
|
||||
} else if (url.pathname === "/api/doc/exists" && req.method === "POST") {
|
||||
await handleDocExistsRequest(res, req);
|
||||
} else if (url.pathname === "/api/obsidian/vaults") {
|
||||
handleObsidianVaultsRequest(res);
|
||||
} else if (url.pathname === "/api/reference/obsidian/files" && req.method === "GET") {
|
||||
handleObsidianFilesRequest(res, url);
|
||||
} else if (url.pathname === "/api/reference/obsidian/doc" && req.method === "GET") {
|
||||
handleObsidianDocRequest(res, url);
|
||||
} else if (url.pathname === "/api/reference/files" && req.method === "GET") {
|
||||
handleFileBrowserRequest(res, url);
|
||||
} else if (url.pathname === "/favicon.svg") {
|
||||
handleFavicon(res);
|
||||
} else if (url.pathname === "/api/exit" && req.method === "POST") {
|
||||
deleteDraft(draftKey);
|
||||
resolveDecision({ feedback: "", annotations: [], exit: true });
|
||||
json(res, { ok: true });
|
||||
} else if (url.pathname === "/api/approve" && req.method === "POST") {
|
||||
deleteDraft(draftKey);
|
||||
resolveDecision({ feedback: "", annotations: [], approved: true });
|
||||
json(res, { ok: true });
|
||||
} else if (url.pathname === "/api/feedback" && req.method === "POST") {
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
deleteDraft(draftKey);
|
||||
resolveDecision({
|
||||
feedback: (body.feedback as string) || "",
|
||||
annotations: (body.annotations as unknown[]) || [],
|
||||
});
|
||||
json(res, { ok: true });
|
||||
} catch (err) {
|
||||
const message = err instanceof Error ? err.message : "Failed to process feedback";
|
||||
json(res, { error: message }, 500);
|
||||
}
|
||||
} else {
|
||||
html(res, options.htmlContent);
|
||||
}
|
||||
});
|
||||
|
||||
const { port, portSource } = await listenOnPort(server);
|
||||
|
||||
return {
|
||||
port,
|
||||
portSource,
|
||||
url: `http://localhost:${port}`,
|
||||
waitForDecision: () => decisionPromise,
|
||||
stop: () => server.close(),
|
||||
};
|
||||
}
|
||||
480
extensions/plannotator/server/serverPlan.ts
Normal file
480
extensions/plannotator/server/serverPlan.ts
Normal file
@@ -0,0 +1,480 @@
|
||||
import { randomUUID } from "node:crypto";
|
||||
import { createServer } from "node:http";
|
||||
|
||||
import { contentHash, deleteDraft } from "../generated/draft.js";
|
||||
import {
|
||||
type ArchivedPlan,
|
||||
generateSlug,
|
||||
getPlanVersion,
|
||||
getPlanVersionPath,
|
||||
getVersionCount,
|
||||
listArchivedPlans,
|
||||
listVersions,
|
||||
readArchivedPlan,
|
||||
saveAnnotations,
|
||||
saveFinalSnapshot,
|
||||
saveToHistory,
|
||||
} from "../generated/storage.js";
|
||||
import { createEditorAnnotationHandler } from "./annotations.js";
|
||||
import { createExternalAnnotationHandler } from "./external-annotations.js";
|
||||
import {
|
||||
handleDraftRequest,
|
||||
handleFavicon,
|
||||
handleImageRequest,
|
||||
handleUploadRequest,
|
||||
} from "./handlers.js";
|
||||
import { html, json, parseBody, requestUrl } from "./helpers.js";
|
||||
import { openEditorDiff } from "./ide.js";
|
||||
import {
|
||||
type BearConfig,
|
||||
type IntegrationResult,
|
||||
type ObsidianConfig,
|
||||
type OctarineConfig,
|
||||
saveToBear,
|
||||
saveToObsidian,
|
||||
saveToOctarine,
|
||||
} from "./integrations.js";
|
||||
import { listenOnPort } from "./network.js";
|
||||
|
||||
import { saveConfig, detectGitUser, getServerConfig } from "../generated/config.js";
|
||||
import { detectProjectName, getRepoInfo } from "./project.js";
|
||||
import {
|
||||
handleDocRequest,
|
||||
handleDocExistsRequest,
|
||||
handleFileBrowserRequest,
|
||||
handleObsidianDocRequest,
|
||||
handleObsidianFilesRequest,
|
||||
handleObsidianVaultsRequest,
|
||||
} from "./reference.js";
|
||||
import { warmFileListCache } from "../generated/resolve-file.js";
|
||||
|
||||
export interface PlanReviewDecision {
|
||||
approved: boolean;
|
||||
feedback?: string;
|
||||
savedPath?: string;
|
||||
agentSwitch?: string;
|
||||
permissionMode?: string;
|
||||
}
|
||||
|
||||
export interface PlanServerResult {
|
||||
reviewId: string;
|
||||
port: number;
|
||||
portSource: "env" | "remote-default" | "random";
|
||||
url: string;
|
||||
waitForDecision: () => Promise<PlanReviewDecision>;
|
||||
onDecision: (listener: (result: PlanReviewDecision) => void | Promise<void>) => () => void;
|
||||
waitForDone?: () => Promise<void>;
|
||||
stop: () => void;
|
||||
}
|
||||
|
||||
export async function startPlanReviewServer(options: {
|
||||
plan: string;
|
||||
htmlContent: string;
|
||||
origin?: string;
|
||||
permissionMode?: string;
|
||||
sharingEnabled?: boolean;
|
||||
shareBaseUrl?: string;
|
||||
pasteApiUrl?: string;
|
||||
mode?: "archive";
|
||||
customPlanPath?: string | null;
|
||||
}): Promise<PlanServerResult> {
|
||||
// Side-channel pre-warm so /api/doc/exists POSTs land on warm cache.
|
||||
void warmFileListCache(process.cwd(), "code");
|
||||
const gitUser = detectGitUser();
|
||||
const sharingEnabled =
|
||||
options.sharingEnabled ?? process.env.PLANNOTATOR_SHARE !== "disabled";
|
||||
const shareBaseUrl =
|
||||
(options.shareBaseUrl ?? process.env.PLANNOTATOR_SHARE_URL) || undefined;
|
||||
const pasteApiUrl =
|
||||
(options.pasteApiUrl ?? process.env.PLANNOTATOR_PASTE_URL) || undefined;
|
||||
|
||||
// --- Archive mode setup ---
|
||||
let archivePlans: ArchivedPlan[] = [];
|
||||
let initialArchivePlan = "";
|
||||
let resolveDone: (() => void) | undefined;
|
||||
let donePromise: Promise<void> | undefined;
|
||||
|
||||
if (options.mode === "archive") {
|
||||
archivePlans = listArchivedPlans(options.customPlanPath ?? undefined);
|
||||
initialArchivePlan =
|
||||
archivePlans.length > 0
|
||||
? (readArchivedPlan(
|
||||
archivePlans[0].filename,
|
||||
options.customPlanPath ?? undefined,
|
||||
) ?? "")
|
||||
: "";
|
||||
donePromise = new Promise<void>((resolve) => {
|
||||
resolveDone = resolve;
|
||||
});
|
||||
}
|
||||
|
||||
// --- Plan review mode setup (skip in archive mode) ---
|
||||
const repoInfo = options.mode !== "archive" ? getRepoInfo() : null;
|
||||
const slug = options.mode !== "archive" ? generateSlug(options.plan) : "";
|
||||
const project = options.mode !== "archive" ? detectProjectName() : "";
|
||||
const historyResult =
|
||||
options.mode !== "archive"
|
||||
? saveToHistory(project, slug, options.plan)
|
||||
: { version: 0, path: "", isNew: false };
|
||||
const previousPlan =
|
||||
options.mode !== "archive" && historyResult.version > 1
|
||||
? getPlanVersion(project, slug, historyResult.version - 1)
|
||||
: null;
|
||||
const versionInfo =
|
||||
options.mode !== "archive"
|
||||
? {
|
||||
version: historyResult.version,
|
||||
totalVersions: getVersionCount(project, slug),
|
||||
project,
|
||||
}
|
||||
: null;
|
||||
|
||||
const reviewId = randomUUID();
|
||||
let resolveDecision!: (result: PlanReviewDecision) => void;
|
||||
const decisionListeners = new Set<(result: PlanReviewDecision) => void | Promise<void>>();
|
||||
let decisionSettled = false;
|
||||
const decisionPromise = new Promise<PlanReviewDecision>((r) => {
|
||||
resolveDecision = r;
|
||||
});
|
||||
const publishDecision = (result: PlanReviewDecision): boolean => {
|
||||
if (decisionSettled) return false;
|
||||
decisionSettled = true;
|
||||
resolveDecision(result);
|
||||
for (const listener of decisionListeners) {
|
||||
Promise.resolve(listener(result)).catch((error) => {
|
||||
console.error("[Plan Review] Decision listener failed:", error);
|
||||
});
|
||||
}
|
||||
return true;
|
||||
};
|
||||
|
||||
// Draft key for annotation persistence
|
||||
const draftKey = options.mode !== "archive" ? contentHash(options.plan) : "";
|
||||
|
||||
// Editor annotations (in-memory, VS Code integration — skip in archive mode)
|
||||
const editorAnnotations = options.mode !== "archive" ? createEditorAnnotationHandler() : null;
|
||||
const externalAnnotations = options.mode !== "archive" ? createExternalAnnotationHandler("plan") : null;
|
||||
|
||||
// Lazy cache for in-session archive tab
|
||||
let cachedArchivePlans: ArchivedPlan[] | null = null;
|
||||
|
||||
const server = createServer(async (req, res) => {
|
||||
const url = requestUrl(req);
|
||||
|
||||
if (url.pathname === "/api/done" && req.method === "POST") {
|
||||
resolveDone?.();
|
||||
json(res, { ok: true });
|
||||
} else if (url.pathname === "/api/archive/plans" && req.method === "GET") {
|
||||
const customPath = url.searchParams.get("customPath") || undefined;
|
||||
if (!cachedArchivePlans)
|
||||
cachedArchivePlans = listArchivedPlans(customPath);
|
||||
json(res, { plans: cachedArchivePlans });
|
||||
} else if (url.pathname === "/api/archive/plan" && req.method === "GET") {
|
||||
const filename = url.searchParams.get("filename");
|
||||
const customPath = url.searchParams.get("customPath") || undefined;
|
||||
if (!filename) {
|
||||
json(res, { error: "Missing filename" }, 400);
|
||||
return;
|
||||
}
|
||||
const markdown = readArchivedPlan(filename, customPath);
|
||||
if (!markdown) {
|
||||
json(res, { error: "Not found" }, 404);
|
||||
return;
|
||||
}
|
||||
json(res, { markdown, filepath: filename });
|
||||
} else if (url.pathname === "/api/plan/version") {
|
||||
const vParam = url.searchParams.get("v");
|
||||
if (!vParam) {
|
||||
json(res, { error: "Missing v parameter" }, 400);
|
||||
return;
|
||||
}
|
||||
const v = parseInt(vParam, 10);
|
||||
if (Number.isNaN(v) || v < 1) {
|
||||
json(res, { error: "Invalid version number" }, 400);
|
||||
return;
|
||||
}
|
||||
const content = getPlanVersion(project, slug, v);
|
||||
if (content === null) {
|
||||
json(res, { error: "Version not found" }, 404);
|
||||
return;
|
||||
}
|
||||
json(res, { plan: content, version: v });
|
||||
} else if (url.pathname === "/api/plan/versions") {
|
||||
json(res, { project, slug, versions: listVersions(project, slug) });
|
||||
} else if (url.pathname === "/api/plan") {
|
||||
if (options.mode === "archive") {
|
||||
json(res, {
|
||||
plan: initialArchivePlan,
|
||||
origin: options.origin ?? "pi",
|
||||
mode: "archive",
|
||||
archivePlans,
|
||||
sharingEnabled,
|
||||
shareBaseUrl,
|
||||
serverConfig: getServerConfig(gitUser),
|
||||
});
|
||||
} else {
|
||||
json(res, {
|
||||
plan: options.plan,
|
||||
origin: options.origin ?? "pi",
|
||||
permissionMode: options.permissionMode,
|
||||
previousPlan,
|
||||
versionInfo,
|
||||
sharingEnabled,
|
||||
shareBaseUrl,
|
||||
pasteApiUrl,
|
||||
repoInfo,
|
||||
projectRoot: process.cwd(),
|
||||
serverConfig: getServerConfig(gitUser),
|
||||
});
|
||||
}
|
||||
} else if (url.pathname === "/api/config" && req.method === "POST") {
|
||||
try {
|
||||
const body = (await parseBody(req)) as { displayName?: string; diffOptions?: Record<string, unknown>; conventionalComments?: boolean };
|
||||
const toSave: Record<string, unknown> = {};
|
||||
if (body.displayName !== undefined) toSave.displayName = body.displayName;
|
||||
if (body.diffOptions !== undefined) toSave.diffOptions = body.diffOptions;
|
||||
if (body.conventionalComments !== undefined) toSave.conventionalComments = body.conventionalComments;
|
||||
if (Object.keys(toSave).length > 0) saveConfig(toSave as Parameters<typeof saveConfig>[0]);
|
||||
json(res, { ok: true });
|
||||
} catch {
|
||||
json(res, { error: "Invalid request" }, 400);
|
||||
}
|
||||
} else if (url.pathname === "/api/image") {
|
||||
handleImageRequest(res, url);
|
||||
} else if (url.pathname === "/api/upload" && req.method === "POST") {
|
||||
await handleUploadRequest(req, res);
|
||||
} else if (url.pathname === "/api/draft") {
|
||||
await handleDraftRequest(req, res, draftKey);
|
||||
} else if (editorAnnotations && (await editorAnnotations.handle(req, res, url))) {
|
||||
return;
|
||||
} else if (externalAnnotations && (await externalAnnotations.handle(req, res, url))) {
|
||||
return;
|
||||
} else if (url.pathname === "/api/doc" && req.method === "GET") {
|
||||
await handleDocRequest(res, url);
|
||||
} else if (url.pathname === "/api/doc/exists" && req.method === "POST") {
|
||||
await handleDocExistsRequest(res, req);
|
||||
} else if (url.pathname === "/api/obsidian/vaults") {
|
||||
handleObsidianVaultsRequest(res);
|
||||
} else if (url.pathname === "/api/reference/obsidian/files" && req.method === "GET") {
|
||||
handleObsidianFilesRequest(res, url);
|
||||
} else if (url.pathname === "/api/reference/obsidian/doc" && req.method === "GET") {
|
||||
handleObsidianDocRequest(res, url);
|
||||
} else if (url.pathname === "/api/reference/files" && req.method === "GET") {
|
||||
handleFileBrowserRequest(res, url);
|
||||
} else if (
|
||||
url.pathname === "/api/plan/vscode-diff" &&
|
||||
req.method === "POST"
|
||||
) {
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
const baseVersion = body.baseVersion as number;
|
||||
if (!baseVersion) {
|
||||
json(res, { error: "Missing baseVersion" }, 400);
|
||||
return;
|
||||
}
|
||||
const basePath = getPlanVersionPath(project, slug, baseVersion);
|
||||
if (!basePath) {
|
||||
json(res, { error: `Version ${baseVersion} not found` }, 404);
|
||||
return;
|
||||
}
|
||||
const result = await openEditorDiff(basePath, historyResult.path);
|
||||
if ("error" in result) {
|
||||
json(res, { error: result.error }, 500);
|
||||
return;
|
||||
}
|
||||
json(res, { ok: true });
|
||||
} catch (err) {
|
||||
json(
|
||||
res,
|
||||
{
|
||||
error:
|
||||
err instanceof Error
|
||||
? err.message
|
||||
: "Failed to open VS Code diff",
|
||||
},
|
||||
500,
|
||||
);
|
||||
}
|
||||
} else if (url.pathname === "/api/agents" && req.method === "GET") {
|
||||
json(res, { agents: [] });
|
||||
} else if (url.pathname === "/favicon.svg") {
|
||||
handleFavicon(res);
|
||||
} else if (url.pathname === "/api/save-notes" && req.method === "POST") {
|
||||
const results: {
|
||||
obsidian?: IntegrationResult;
|
||||
bear?: IntegrationResult;
|
||||
octarine?: IntegrationResult;
|
||||
} = {};
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
const promises: Promise<void>[] = [];
|
||||
const obsConfig = body.obsidian as ObsidianConfig | undefined;
|
||||
const bearConfig = body.bear as BearConfig | undefined;
|
||||
const octConfig = body.octarine as OctarineConfig | undefined;
|
||||
if (obsConfig?.vaultPath && obsConfig?.plan) {
|
||||
promises.push(
|
||||
saveToObsidian(obsConfig).then((r) => {
|
||||
results.obsidian = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
if (bearConfig?.plan) {
|
||||
promises.push(
|
||||
saveToBear(bearConfig).then((r) => {
|
||||
results.bear = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
if (octConfig?.plan && octConfig?.workspace) {
|
||||
promises.push(
|
||||
saveToOctarine(octConfig).then((r) => {
|
||||
results.octarine = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
await Promise.allSettled(promises);
|
||||
for (const [name, result] of Object.entries(results)) {
  if (result && !result.success)
    console.error(`[${name}] Save failed: ${result.error}`);
}
|
||||
} catch (err) {
|
||||
console.error(`[Save Notes] Error:`, err);
|
||||
json(res, { error: "Save failed" }, 500);
|
||||
return;
|
||||
}
|
||||
json(res, { ok: true, results });
|
||||
} else if (url.pathname === "/api/approve" && req.method === "POST") {
|
||||
if (decisionSettled) {
|
||||
json(res, { ok: true, duplicate: true });
|
||||
return;
|
||||
}
|
||||
let feedback: string | undefined;
|
||||
let agentSwitch: string | undefined;
|
||||
let requestedPermissionMode: string | undefined;
|
||||
let planSaveEnabled = true;
|
||||
let planSaveCustomPath: string | undefined;
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
if (body.feedback) feedback = body.feedback as string;
|
||||
if (body.agentSwitch) agentSwitch = body.agentSwitch as string;
|
||||
if (body.permissionMode)
|
||||
requestedPermissionMode = body.permissionMode as string;
|
||||
if (body.planSave !== undefined) {
|
||||
const ps = body.planSave as { enabled: boolean; customPath?: string };
|
||||
planSaveEnabled = ps.enabled;
|
||||
planSaveCustomPath = ps.customPath;
|
||||
}
|
||||
// Run note integrations in parallel
|
||||
const integrationResults: Record<string, IntegrationResult> = {};
|
||||
const integrationPromises: Promise<void>[] = [];
|
||||
const obsConfig = body.obsidian as ObsidianConfig | undefined;
|
||||
const bearConfig = body.bear as BearConfig | undefined;
|
||||
const octConfig = body.octarine as OctarineConfig | undefined;
|
||||
if (obsConfig?.vaultPath && obsConfig?.plan) {
|
||||
integrationPromises.push(
|
||||
saveToObsidian(obsConfig).then((r) => {
|
||||
integrationResults.obsidian = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
if (bearConfig?.plan) {
|
||||
integrationPromises.push(
|
||||
saveToBear(bearConfig).then((r) => {
|
||||
integrationResults.bear = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
if (octConfig?.plan && octConfig?.workspace) {
|
||||
integrationPromises.push(
|
||||
saveToOctarine(octConfig).then((r) => {
|
||||
integrationResults.octarine = r;
|
||||
}),
|
||||
);
|
||||
}
|
||||
await Promise.allSettled(integrationPromises);
|
||||
for (const [name, result] of Object.entries(integrationResults)) {
  if (result && !result.success)
    console.error(`[${name}] Save failed: ${result.error}`);
}
|
||||
} catch (err) {
|
||||
console.error(`[Integration] Error:`, err);
|
||||
}
|
||||
// Save annotations and final snapshot
|
||||
let savedPath: string | undefined;
|
||||
if (planSaveEnabled) {
|
||||
const annotations = feedback || "";
|
||||
if (annotations) saveAnnotations(slug, annotations, planSaveCustomPath);
|
||||
savedPath = saveFinalSnapshot(
|
||||
slug,
|
||||
"approved",
|
||||
options.plan,
|
||||
annotations,
|
||||
planSaveCustomPath,
|
||||
);
|
||||
}
|
||||
deleteDraft(draftKey);
|
||||
const effectivePermissionMode = requestedPermissionMode || options.permissionMode;
|
||||
publishDecision({
|
||||
approved: true,
|
||||
feedback,
|
||||
savedPath,
|
||||
agentSwitch,
|
||||
permissionMode: effectivePermissionMode,
|
||||
});
|
||||
json(res, { ok: true, savedPath });
|
||||
} else if (url.pathname === "/api/deny" && req.method === "POST") {
|
||||
if (decisionSettled) {
|
||||
json(res, { ok: true, duplicate: true });
|
||||
return;
|
||||
}
|
||||
let feedback = "Plan rejected by user";
|
||||
let planSaveEnabled = true;
|
||||
let planSaveCustomPath: string | undefined;
|
||||
try {
|
||||
const body = await parseBody(req);
|
||||
feedback = (body.feedback as string) || feedback;
|
||||
if (body.planSave !== undefined) {
|
||||
const ps = body.planSave as { enabled: boolean; customPath?: string };
|
||||
planSaveEnabled = ps.enabled;
|
||||
planSaveCustomPath = ps.customPath;
|
||||
}
|
||||
} catch {
|
||||
/* use default feedback */
|
||||
}
|
||||
let savedPath: string | undefined;
|
||||
if (planSaveEnabled) {
|
||||
saveAnnotations(slug, feedback, planSaveCustomPath);
|
||||
savedPath = saveFinalSnapshot(
|
||||
slug,
|
||||
"denied",
|
||||
options.plan,
|
||||
feedback,
|
||||
planSaveCustomPath,
|
||||
);
|
||||
}
|
||||
deleteDraft(draftKey);
|
||||
publishDecision({ approved: false, feedback, savedPath });
|
||||
json(res, { ok: true, savedPath });
|
||||
} else {
|
||||
html(res, options.htmlContent);
|
||||
}
|
||||
});
|
||||
|
||||
const { port, portSource } = await listenOnPort(server);
|
||||
|
||||
return {
|
||||
reviewId,
|
||||
port,
|
||||
portSource,
|
||||
url: `http://localhost:${port}`,
|
||||
waitForDecision: () => decisionPromise,
|
||||
onDecision: (listener) => {
|
||||
decisionListeners.add(listener);
|
||||
return () => {
|
||||
decisionListeners.delete(listener);
|
||||
};
|
||||
},
|
||||
...(donePromise && { waitForDone: () => donePromise }),
|
||||
stop: () => server.close(),
|
||||
};
|
||||
}
|
||||
1174
extensions/plannotator/server/serverReview.ts
Normal file
1174
extensions/plannotator/server/serverReview.ts
Normal file
File diff suppressed because it is too large
23
extensions/plannotator/skills/plannotator-annotate/SKILL.md
Normal file
23
extensions/plannotator/skills/plannotator-annotate/SKILL.md
Normal file
@@ -0,0 +1,23 @@
---
name: plannotator-annotate
description: Open Plannotator's annotation UI for a markdown file, converted HTML file, URL, or folder and then respond to the returned annotations.
---

# Plannotator Annotate

Use this skill when the user wants to annotate a document in Plannotator instead of reviewing it inline in chat.

Run:

```bash
plannotator annotate <path-or-url>
```

Behavior:

1. Launch the command with Bash.
2. Wait for the browser review to finish.
3. If annotations are returned, address them directly.
4. If the session closes without feedback, say so briefly and continue.

Do not ask the user to paste a shell command into the chat. Run the command yourself.

@@ -0,0 +1,5 @@
interface:
  display_name: "Plannotator Annotate"
  short_description: "Annotate a markdown file, URL, or folder in Plannotator."
policy:
  allow_implicit_invocation: false
574
extensions/plannotator/skills/plannotator-compound/SKILL.md
Normal file
574
extensions/plannotator/skills/plannotator-compound/SKILL.md
Normal file
@@ -0,0 +1,574 @@
---
name: plannotator-compound
disable-model-invocation: true
description: >
  Analyze a user's Plannotator plan archive to extract denial patterns, feedback
  taxonomy, evolution over time, and actionable prompt improvements — then produce
  a polished HTML dashboard report. Falls back to Claude Code ExitPlanMode denial
  reasons when Plannotator data is unavailable.
---

# Compound Planning Analysis

You are conducting a comprehensive research analysis of a user's Plannotator plan
archive. The goal: extract patterns from their denied plans, reduce them into
actionable insights, and produce an elegant HTML dashboard report.

This is a multi-phase process. Each phase must complete fully before the next begins.
Research integrity is paramount — every file must be read, no skipping.

## Source Selection

Before starting the analysis, determine which data source is available.

1. **Plannotator mode (first-class)** — Check `~/.plannotator/plans/`. If it
   exists and contains `*-denied.md` files, use this mode. The entire workflow
   below is written for Plannotator data.

2. **Claude Code fallback mode** — If the Plannotator archive is absent or
   contains no denied plans, check `~/.claude/projects/`. If present, read
   [references/claude-code-fallback.md](references/claude-code-fallback.md)
   before continuing. That reference explains how to use the bundled parser at
   [scripts/extract_exit_plan_mode_outcomes.py](scripts/extract_exit_plan_mode_outcomes.py)
   to extract denial reasons from Claude Code JSONL transcripts. Every phase
   below has a short note explaining what changes in fallback mode — the
   reference file has the details.

3. **Neither available** — Ask the user for their Plannotator plans directory or
   Claude Code projects directory. Do not guess.

## Phase 0: Locate Plans & Check for Previous Reports
|
||||
|
||||
Use the mode chosen in Source Selection above.
|
||||
|
||||
**Plannotator mode:** Verify the plans directory contains `*-denied.md` files. If
|
||||
none exist, fall back to Claude Code mode before stopping.
|
||||
|
||||
**Claude Code fallback mode:** Run the bundled parser per the fallback reference to
|
||||
build the denial-reason dataset. Create `/tmp/compound-planning/` if needed.
|
||||
|
||||
In either mode, proceed to Previous Report Detection below.
|
||||
|
||||
### Previous Report Detection
|
||||
|
||||
After locating the plans directory, check for existing reports:
|
||||
|
||||
```
|
||||
ls ~/.plannotator/plans/compound-planning-report*.html
|
||||
```
|
||||
|
||||
Reports follow a versioned naming scheme:
|
||||
- First report: `compound-planning-report.html`
|
||||
- Subsequent reports: `compound-planning-report-v2.html`, `compound-planning-report-v3.html`, etc.
|
||||
|
||||
If one or more reports exist, determine the **latest** one (highest version number).
|
||||
Get its filesystem modification date using `stat` (macOS: `stat -f %Sm -t %Y-%m-%d`,
|
||||
Linux: `stat -c %y | cut -d' ' -f1`). This is the **cutoff date**.
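A sketch of that detection step, assuming the default archive location (note that `sort -V` only approximates how the unsuffixed first report orders against the `-vN` names):

```bash
# Find the latest report and take its modification date as the cutoff.
latest=$(ls ~/.plannotator/plans/compound-planning-report*.html 2>/dev/null | sort -V | tail -n 1)
if [ -n "$latest" ]; then
  cutoff=$(stat -f %Sm -t %Y-%m-%d "$latest")       # macOS
  # cutoff=$(stat -c %y "$latest" | cut -d' ' -f1)  # Linux
  echo "Previous report: $latest (cutoff: $cutoff)"
fi
```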
Present the user with a choice:
|
||||
|
||||
> "I found a previous report (`compound-planning-report-v{N}.html`) last updated
|
||||
> on {CUTOFF_DATE}. I can either:
|
||||
>
|
||||
> 1. **Incremental** — Only analyze files dated after {CUTOFF_DATE}, saving tokens
|
||||
> and building on previous findings
|
||||
> 2. **Full** — Re-analyze the entire archive from scratch
|
||||
>
|
||||
> Which would you prefer?"
|
||||
|
||||
Wait for the user's response before proceeding.
|
||||
|
||||
**If incremental:** Filter all subsequent phases to only process files with dates
|
||||
after the cutoff date. The new report version will note in its header narrative that
|
||||
it covers the period from {CUTOFF_DATE} to present, and reference the previous
|
||||
report for earlier findings. The inventory (Phase 1) should still count ALL files
|
||||
for overall stats, but clearly separate "new since last report" counts.
|
||||
|
||||
**If full:** Proceed normally with all files, but still use the next version number
|
||||
for the output filename.
|
||||
|
||||
**If no previous report exists:** Proceed normally. The output filename will be
|
||||
`compound-planning-report.html` (no version suffix for the first report).
|
||||
|
||||
## Phase 1: Inventory
|
||||
|
||||
Count and report the dataset. **Always count ALL files** for overall stats,
|
||||
regardless of whether this is an incremental or full run:
|
||||
|
||||
```
|
||||
- *-approved.md files (count)
|
||||
- *-denied.md files (count)
|
||||
- Date range (earliest to latest date found in filenames)
|
||||
- Total days spanned
|
||||
- Revision rate: denied / (approved + denied) — this is the "X% of plans
|
||||
revised before coding" stat used in dashboard section 1
|
||||
```
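A minimal bash sketch for gathering these numbers, assuming the default `~/.plannotator/plans/` location:

```bash
plans=~/.plannotator/plans
approved=$(ls "$plans"/*-approved.md 2>/dev/null | wc -l)
denied=$(ls "$plans"/*-denied.md 2>/dev/null | wc -l)
# Revision rate = denied / (approved + denied)
awk -v a="$approved" -v d="$denied" \
  'BEGIN { printf "approved=%d denied=%d revision rate=%.0f%%\n", a, d, (a + d) ? 100 * d / (a + d) : 0 }'
```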
**Note:** Ignore `*.annotations.md` files entirely. Denied files already contain
|
||||
the full plan text plus all reviewer feedback appended after a `---` separator.
|
||||
Annotation files are redundant subsets of this content — reading both would
|
||||
double-count feedback.
|
||||
|
||||
**If incremental mode:** After the total counts, separately report the counts for
|
||||
files dated after the cutoff date only:
|
||||
|
||||
```
|
||||
New since {CUTOFF_DATE}:
|
||||
- *-denied.md files: X (of Y total)
|
||||
- New date range: {CUTOFF_DATE} to {LATEST_DATE}
|
||||
- New days spanned: N
|
||||
```
|
||||
|
||||
If fewer than 3 new denied files exist since the cutoff, warn the user:
|
||||
> "Only {N} new denied plans since the last report. The incremental analysis may
|
||||
> be thin. Would you like to proceed or switch to a full analysis?"
|
||||
|
||||
Also run `wc -l` across all `*-approved.md` files to get average lines per
|
||||
approved plan. This tells the user whether their plans are staying lightweight
|
||||
or bloating over time. You do not need to read approved plan contents — just
|
||||
their line counts. If possible, break this down by time period (e.g., monthly)
|
||||
to show whether plan size changed.
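For example, a sketch of the line-count check (the per-period breakdown is left to the analysis itself):

```bash
# Average length of approved plans, from line counts only
wc -l ~/.plannotator/plans/*-approved.md \
  | awk '!/ total$/ { sum += $1; n++ } END { if (n) printf "avg %.0f lines across %d approved plans\n", sum / n, n }'
```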
Dates appear in filenames in YYYY-MM-DD format, sometimes as a prefix
|
||||
(2026-01-07-name-approved.md) and sometimes embedded (name-2026-03-15-approved.md).
|
||||
Extract dates from all filenames.
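One way to pull them out, as a sketch that works for either filename shape:

```bash
# Extract every YYYY-MM-DD stamp from the archive filenames, oldest first
ls ~/.plannotator/plans/*-approved.md ~/.plannotator/plans/*-denied.md 2>/dev/null \
  | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' | sort -u
```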
Tell the user what you found and that you're beginning the extraction.
|
||||
|
||||
**Claude Code fallback mode:** The Plannotator inventory fields above do not apply.
|
||||
Follow the inventory instructions in
|
||||
[references/claude-code-fallback.md](references/claude-code-fallback.md) instead —
|
||||
report the denial-reason dataset assembled by the parser.
|
||||
|
||||
## Phase 2: Map — Parallel Extraction
|
||||
|
||||
This is the most time-intensive phase. You must read EVERY `*-denied.md` file
|
||||
**in scope**. Do not skip files. Do not summarize early.
|
||||
|
||||
**In scope** means: all denied files if running a full analysis, or only denied
|
||||
files dated after the cutoff date if running incrementally. In incremental mode,
|
||||
only process files whose embedded YYYY-MM-DD date is strictly after the cutoff.
|
||||
|
||||
**Claude Code fallback mode:** The parser output is the clean source dataset. Read
|
||||
the fallback reference for the extraction prompt and batching strategy specific to
|
||||
JSON part files. Do not go back to raw `.jsonl` logs unless the parser fails or the
|
||||
user asks for audit-level verification.
|
||||
|
||||
**Important:** Only read `*-denied.md` files. Do NOT read approved plans,
|
||||
annotation files, or diff files. Each denied file contains the full plan text
|
||||
followed by a `---` separator and the reviewer's feedback — everything needed
|
||||
for analysis is in one file.
|
||||
|
||||
### Batching Strategy
|
||||
|
||||
All extraction agents should use `model: "haiku"` — they're doing straightforward
|
||||
file reading and structured extraction, not reasoning. Haiku is faster and cheaper
|
||||
for this work.
|
||||
|
||||
The approach depends on dataset size:
|
||||
|
||||
**Tiny datasets (≤ 10 total files):** Read all files directly in the main agent —
|
||||
no need for sub-agents. Just read them sequentially and proceed to Phase 3.
|
||||
|
||||
**Small datasets (11-30 files):** Launch 2-3 parallel Haiku agents, splitting
|
||||
files roughly evenly.
|
||||
|
||||
**Medium datasets (31-80 files):** Launch 4-6 parallel Haiku agents (~10-15 files
|
||||
each). Split by file type and/or time period.
|
||||
|
||||
**Large datasets (80+ files):** Launch as many parallel Haiku agents as needed to
|
||||
keep each batch around 10-15 files. Split by the natural time boundaries in the
|
||||
data (months, quarters, or whatever groupings produce balanced batches). If one
|
||||
time period dominates (e.g., the most recent month has 3x the files), split that
|
||||
period into multiple batches.
|
||||
|
||||
Launch all extraction agents in parallel using the Agent tool with
|
||||
`run_in_background: true` and `model: "haiku"`.
|
||||
|
||||
### Output Files
|
||||
|
||||
Each extraction agent must write its results to a clean output file rather than
|
||||
relying on the agent task output (which contains interleaved JSONL framework
|
||||
logs that are difficult to parse). Instruct each agent to write to:
|
||||
|
||||
```
|
||||
/tmp/compound-planning/extraction-{batch-name}.md
|
||||
```
|
||||
|
||||
Create the `/tmp/compound-planning/` directory before launching agents. The
|
||||
reduce agent in Phase 3 will read these clean files directly.
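For example (the batch name below is a placeholder):

```bash
mkdir -p /tmp/compound-planning
# Each agent writes to its own file, e.g. /tmp/compound-planning/extraction-<batch-name>.md
```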
### Extraction Prompt
|
||||
|
||||
Each agent receives this instruction (adapt the time period, file list, and
|
||||
output path):
|
||||
|
||||
```
|
||||
You are extracting structured data from denied plan files for a pattern analysis.
|
||||
|
||||
Directory: [PLANS DIRECTORY]
|
||||
Files to read: [LIST OF SPECIFIC *-denied.md FILES]
|
||||
Output: Write your complete results to [OUTPUT FILE PATH]
|
||||
|
||||
Each denied file contains two parts separated by a --- line:
|
||||
1. The plan text (above the ---)
|
||||
2. The reviewer's feedback and annotations (below the ---)
|
||||
|
||||
Read EVERY file in your list. For EACH file, extract:
|
||||
- The plan name/topic (from the plan text above the ---)
|
||||
- The denial reason or feedback given (from below the --- — capture the actual
|
||||
words used)
|
||||
- What was specifically asked to change
|
||||
- The type of feedback (let the content determine the category — don't force-fit
|
||||
into predefined types. Common types include things like: scope concerns,
|
||||
approach disagreements, missing information, process requirements, quality
|
||||
concerns, UX/design issues, naming disputes, clarification requests,
|
||||
testing/procedural denials — but the user's actual patterns may differ)
|
||||
- Any specific phrases or recurring language from the reviewer
|
||||
- Individual annotations if present (numbered feedback items with quoted text
|
||||
and reviewer comments)
|
||||
- The date (extracted from the filename)
|
||||
|
||||
Do NOT skip any files. One entry per file.
|
||||
|
||||
Format each entry as:
|
||||
**[filename]**
|
||||
- Date: ...
|
||||
- Topic: ...
|
||||
- Denial reason: ...
|
||||
- Feedback type: ...
|
||||
- Specific asks: ...
|
||||
- Notable phrases: ...
|
||||
- Annotations: [count, with brief summary of each]
|
||||
---
|
||||
|
||||
After processing all files, write the complete results to [OUTPUT FILE PATH].
|
||||
State the total file count at the end of the file.
|
||||
```
|
||||
|
||||
### While Agents Run
|
||||
|
||||
Track completion. As each agent finishes, note the count of files it processed.
|
||||
Verify the total matches the inventory from Phase 1. If any agent's count is
|
||||
short, flag it and consider re-launching for the missing files.
|
||||
|
||||
If an agent times out (possible with large batches — a batch of 128 files can
|
||||
take 8+ minutes), re-launch it for just the unprocessed files. Check the output
|
||||
file to see how far it got before timing out.
|
||||
|
||||
## Phase 3: Reduce — Pattern Analysis
|
||||
|
||||
Once ALL extraction agents have completed (or all files have been read for tiny
|
||||
datasets), proceed with the reduction. Reduction agents should use `model: "sonnet"`
|
||||
— this phase requires real analytical reasoning, not just file reading.
|
||||
|
||||
### Reduction Strategy
|
||||
|
||||
The approach depends on how many extraction files were produced:
|
||||
|
||||
**Standard (≤ 20 extraction files):** Launch a single Sonnet agent to read all
|
||||
extraction files and produce the full analysis. This covers most datasets.
|
||||
|
||||
**Large (21+ extraction files):** Use a two-stage reduce:
|
||||
|
||||
1. **Stage 1 — Partial reduces:** Split the extraction files into groups of 4-6.
|
||||
Launch parallel Sonnet agents, each reading one group and producing a partial
|
||||
analysis with the same sections listed below. Each writes to
|
||||
`/tmp/compound-planning/partial-reduce-{N}.md`.
|
||||
|
||||
2. **Stage 2 — Final reduce:** A single Sonnet agent reads all partial reduce
|
||||
files and synthesizes them into the final comprehensive analysis. This agent
|
||||
merges taxonomies, combines counts, deduplicates patterns, and reconciles any
|
||||
conflicting categorizations across partials.
|
||||
|
||||
**Claude Code fallback mode:** The reduction phase is the same. The only upstream
|
||||
difference is that extraction files were derived from normalized denial-reason JSON
|
||||
instead of Plannotator markdown files.
|
||||
|
||||
### Reduction Prompt
|
||||
|
||||
Give each reduction agent this prompt (adapt file paths for single vs multi-stage):
|
||||
|
||||
```
|
||||
You are a data scientist conducting the reduction phase of a map-reduce analysis
|
||||
across a user's denied plan archive.
|
||||
|
||||
Read ALL extraction files at [FILE PATHS]
|
||||
|
||||
These files contain structured extractions from every denied plan file. Each
|
||||
extraction includes the plan topic, denial feedback, annotations, and reviewer
|
||||
language. Your job: aggregate everything, find patterns, cluster into a taxonomy,
|
||||
and produce a comprehensive analysis.
|
||||
|
||||
Be exhaustive. Use real counts. Quote real phrases from the data. This is
|
||||
research — no hand-waving, no fabrication.
|
||||
|
||||
Write your complete results to [OUTPUT FILE PATH].
|
||||
|
||||
Produce the following sections:
|
||||
[... sections listed below ...]
|
||||
```
|
||||
|
||||
The reduction agent's job is to let the data speak. Do not impose a predetermined
|
||||
framework — discover what's actually there. The analysis must produce:
|
||||
|
||||
### 1. Denial Reason Taxonomy
|
||||
Categorize every denial into a finite set of types that emerge from the data. Count
|
||||
occurrences. Show percentages. Include real example quotes for each type. Aim for
|
||||
8-15 categories — enough to be specific, few enough to be scannable. Let the user's
|
||||
actual feedback determine what the categories are.
|
||||
|
||||
### 2. Top Feedback Patterns (ranked by frequency)
|
||||
The 5-10 most recurring patterns. For each: what the reviewer consistently asks for,
|
||||
3+ example quotes from different files, and whether the pattern changed over time.
|
||||
|
||||
### 3. Recurring Phrases
|
||||
Exact phrases the reviewer uses repeatedly, with counts and what they signal. These
|
||||
are the reviewer's vocabulary — their shorthand for what they care about.
|
||||
|
||||
### 4. What the Reviewer Values (implicit preferences)
|
||||
Derived from patterns — what does this specific person care about most? Quality?
|
||||
Speed? Narrative? Architecture? Process? Simplicity? Rank by evidence strength.
|
||||
This section should feel like a personality profile of the reviewer's standards.
|
||||
|
||||
### 5. What Agents Consistently Get Wrong
|
||||
The flip side — what recurring mistakes trigger denials? What should agents stop
|
||||
doing for this reviewer?
|
||||
|
||||
### 6. Structural Requests
|
||||
What plan structure does the reviewer consistently demand? Required sections,
|
||||
ordering, format preferences, level of detail expected.
|
||||
|
||||
### 7. Evolution Over Time
|
||||
How feedback patterns changed across the time span. Group by whatever natural time
|
||||
boundaries exist in the data (weeks for short spans, months for longer ones). Did
|
||||
expectations mature? Did new patterns emerge? What shifted? If the dataset spans
|
||||
less than a month, note that evolution analysis is limited but still look for any
|
||||
progression from early to late files.
|
||||
|
||||
### 8. Actionable Prompt Instructions
|
||||
The most important output. Based on all patterns: specific numbered instructions
|
||||
that could be embedded in a planning prompt to prevent the most common denial
|
||||
reasons. Write these as actual directives an agent could follow. Be specific to
|
||||
this user's patterns — generic advice like "write good plans" is worthless. Each
|
||||
instruction should trace back to a real, frequent denial pattern.
|
||||
|
||||
After writing the instructions, calculate what percentage of denials they would
|
||||
address (count how many denials fall into categories covered by the instructions
|
||||
vs total denials). Report this percentage — it will be different for every user.
|
||||
|
||||
## Phase 4: Generate the HTML Dashboard
|
||||
|
||||
Build a single, self-contained HTML file as the final deliverable. Save it to
|
||||
the user's plans directory with a versioned filename:
|
||||
|
||||
- First ever report: `compound-planning-report.html`
|
||||
- Second report: `compound-planning-report-v2.html`
|
||||
- Third report: `compound-planning-report-v3.html`
|
||||
- And so on.
|
||||
|
||||
The version number was determined in Phase 0 based on existing reports found.
|
||||
|
||||
**If this is an incremental report**, the header should indicate the analysis
|
||||
period (e.g., "March 15 – March 31, 2026") and include a subtitle noting
|
||||
"Incremental analysis — see v{N-1} for earlier findings." The narrative in
|
||||
section 1 should frame findings as what's new or changed since the last report,
|
||||
not as a complete picture. Overall stats in the header (file counts, revision
|
||||
rate) should still reflect the full archive for context.
|
||||
|
||||
Read the template at `assets/report-template.html` for the **design language
|
||||
only**. The template contains example data from a previous analysis — ignore all
|
||||
data values, quotes, and percentages in the template. Use only its visual design:
|
||||
colors, typography, spacing, component styles, and layout patterns.
|
||||
|
||||
### Design Language (from template)
|
||||
|
||||
- **Palette:** Light mode, warm off-white (#FDFCFB), text in slate scale, amber
|
||||
for highlights/accents, emerald for positive, rose for negative, indigo for
|
||||
action elements
|
||||
- **Typography:** Playfair Display (serif, for narrative headings), Inter (sans,
|
||||
for body/data), JetBrains Mono (mono, for code/phrases) — Google Fonts CDN
|
||||
- **Layout:** Single-column, max-width 1024px, generous vertical whitespace (128px
|
||||
between major sections), editorial/narrative-first aesthetic
|
||||
- **Tone:** Calm, reflective, authoritative. Like a personal retrospective journal,
|
||||
not a monitoring dashboard.
|
||||
|
||||
### Page Frame (header + footer)
|
||||
|
||||
Before the 7 sections, the page has:
|
||||
|
||||
- **Header:** Report title on the left (Playfair Display, ~36px), project name +
|
||||
date range below it in light meta text. On the right: file counts in mono
|
||||
(e.g., "223 denials · 71 days"). Separated from content by
|
||||
a bottom border. Generous bottom padding before section 1.
|
||||
|
||||
- **Footer:** After section 7. Top border, centered italic Playfair Display tagline
|
||||
summarizing the corpus (e.g., "Analysis of X denied plans from the Plannotator
|
||||
archive.").
|
||||
|
||||
### Dashboard Section Order (7 sections)
|
||||
|
||||
The report follows this exact section order. Each section builds on the previous
|
||||
one — the flow moves from "what happened" through "why" to "what to do about it":
|
||||
|
||||
1. **The story in the data** — An editorial narrative paragraph (Playfair Display
|
||||
serif, ~26px) that tells the headline finding in prose. Not bullet points — a
|
||||
real paragraph that reads like the opening of an article. Alongside it, a KPI
|
||||
sidebar with 3 key metrics (the top denial percentage, the overall revision
|
||||
rate, and the number of distinct denial categories found). Use an amber inline
|
||||
highlight on the most striking number in the narrative.
|
||||
|
||||
2. **Why plans get denied** — The taxonomy as a ranked list. Each row: rank number
|
||||
(mono), category label, a thin 4px progress bar (top item in amber-500, rest
|
||||
in slate-300), percentage (mono), and for the top entries, a real italic quote
|
||||
from the data below the label. Show the top 10 categories or however many the
|
||||
data supports (minimum 5).
|
||||
|
||||
3. **How expectations evolved** — One card per natural time period. Each card has:
|
||||
the period name in serif, a theme phrase in colored uppercase (different color
|
||||
per period to show progression), a description paragraph, and a stat line at
|
||||
the bottom (e.g., "X denials · Y narrative requests"). If the data spans less
|
||||
than 3 distinct periods, use 2 cards or even a single card with internal
|
||||
progression noted.
|
||||
|
||||
4. **What works vs what doesn't** — Two side-by-side cards. Left: green-tinted
|
||||
(emerald-50/50 bg, emerald-100 border) with traits of plans that succeed for
|
||||
this reviewer. Right: red-tinted (rose-50/50 bg, rose-100 border) with what
|
||||
agents keep getting wrong. Both derived from the reduction analysis. Bulleted
|
||||
with small colored dots. 5-8 items per card.
|
||||
|
||||
5. **The actionable output** — The diagnostic payoff. Opens with a Playfair
|
||||
Display narrative sentence stating how many prompt instructions were derived
|
||||
and what estimated percentage of denials they address (use the real calculated
|
||||
percentage from Phase 3, not a generic number). Then the top 3 most impactful
|
||||
improvements as numbered items, each with an amber number, bold title, and
|
||||
one-line description. This section bridges the analysis and the full prompt
|
||||
that follows.
|
||||
|
||||
6. **Your most-used phrases** — Grid of chips (2-col mobile, 3-col desktop). Each
|
||||
chip: monospace quoted phrase on the left, frequency count on the right. White
|
||||
bg, slate-200 border, rounded-12px. Show 9-12 of the most recurring phrases
|
||||
found. These should be the reviewer's actual words — their verbal fingerprint.
|
||||
|
||||
7. **The corrective prompt** — Dark panel (slate-900 bg, white text, rounded-3xl,
|
||||
shadow-xl). Opens with a Playfair intro sentence about the instructions. Then
|
||||
a dark code block (slate-800/80 bg, amber-200 monospace text) containing the
|
||||
full numbered prompt instructions from Phase 3. Include a copy-to-clipboard
|
||||
button that works (JS included). Below the code block: a gradient glow card
|
||||
(indigo-to-purple blurred halo behind a white card) with a closing message
|
||||
that these instructions are personal — derived from the user's own feedback,
|
||||
their own language, their own standards.
|
||||
|
||||
### Adaptation Rules

- If the user has < 3 months of data, reduce the evolution section to fewer cards
- If most denied files lack feedback below the `---` (bare denials with no
annotations), note this in the narrative — the analysis will be thinner
- **Claude Code fallback mode:** Explicitly label the report source as Claude Code
`ExitPlanMode` denial reasons. Do not fabricate Plannotator-only fields such as
annotation counts or approved-plan line counts. See the fallback reference for
KPI substitutes and footer/provenance guidance.
- If fewer than 5 denial categories emerge, combine the taxonomy and patterns
sections into one
- If the dataset is very small (< 20 files), the narrative should acknowledge the
limited sample size and frame findings as preliminary
- The number of prompt instructions will vary per user — could be 8 or 20. Don't
force exactly 17. Let the data determine the count.
- The top 3 actionable items in section 5 must be the 3 that cover the largest
share of denials, not the 3 that sound most impressive

### Key Rules

1. Every number must come from the real analysis — no fabricated data
2. Every quote must be a real quote from a real file
3. The taxonomy percentages must be calculated from real counts
4. The prompt instructions must trace back to actual denial patterns
5. The copy button on the prompt block must work (include the JS)

After generating, open the file in the user's browser.

## Phase 5: Summary

Tell the user:
- How many denied files were analyzed
- If incremental: how many were new since the last report
- The top 3 denial patterns found
- The estimated percentage of denials the prompt instructions would address
- The single most impactful prompt improvement
- Where the report was saved (including version number)
- If incremental: remind the user that earlier findings are in the previous report

**Claude Code fallback mode:** Adapt the summary per the fallback reference —
report human denial reasons analyzed and total `ExitPlanMode` attempts scanned
instead of Plannotator file counts.

## Phase 6: Improvement Hook

After presenting the summary, ask the user if they want to enable an **improvement
hook**. This takes the corrective prompt instructions from section 7 of the report
and writes them to a file that Plannotator's `EnterPlanMode` hook can inject into
every future planning session automatically.

> "Would you like to enable the improvement hook? This will save the corrective
> prompt instructions to a file that gets automatically injected into all future
> planning sessions — so Claude sees your feedback patterns before writing any plan."

**If yes:**

The hook file lives at:

```
~/.plannotator/hooks/compound/enterplanmode-improve-hook.txt
```

Create the `~/.plannotator/hooks/compound/` directory if it doesn't exist.

The file contents should be the corrective prompt instructions from Phase 3 —
the same numbered list that appears in section 7 of the HTML report. Write them
as plain text, one instruction per line, prefixed with their number. No HTML, no
markdown fences, no preamble — just the instructions themselves. The hook system
will inject this file's contents as-is into the planning context.

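For example, if the analysis surfaced the narrative, confidence, and reuse
patterns, the hook file would look something like this (illustrative instructions
only; the real file carries whatever Phase 3 produced):

```text
1. Begin every plan with a Solution Overview written in narrative prose.
2. End every plan with a Confidence Assessment that names risks and uncertainties.
3. Search for existing codebase patterns and state what was found before proposing new code.
```
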
**If the file already exists:**

Read the existing file and present the user with a choice:

> "An improvement hook already exists from a previous analysis. I can:
>
> 1. **Replace** — Overwrite with the new instructions (the old ones are gone)
> 2. **Merge** — Combine both, deduplicating overlapping instructions and
> keeping the best version of each
> 3. **Keep existing** — Leave the current hook as-is, skip this step
>
> Which would you prefer?"

- **Replace:** Overwrite the file with the new instructions.
- **Merge:** Read the existing instructions, compare with the new ones, and
produce a merged set. Remove duplicates (same intent even if worded differently).
When two instructions cover the same pattern, keep the more specific or
actionable version. Re-number the final list sequentially. Write the merged
result to the file. Show the user what changed (added N new, removed N
redundant, kept N existing).
- **Keep existing:** Do nothing, move on.

**If no:** Skip this phase entirely.

## Important Notes

- **Data source priority:** Plannotator is the first-class path. Claude Code log
analysis is the secondary path for users without Plannotator archives.
- **Research integrity:** Every file must be read. The value of this analysis comes
from completeness. Sampling or skipping undermines the findings.
- **Real data only:** Never fabricate quotes, percentages, or patterns. If the data
doesn't show a clear pattern, say so honestly rather than inventing one.
- **Let the data lead:** The taxonomy, patterns, and instructions should emerge from
what's actually in the files. Different users will have completely different
denial patterns. A user building mobile apps will have different feedback than
one building APIs. Don't assume what the patterns will be.
- **Agent parallelization:** For large datasets, maximize parallel agents to reduce
wall-clock time. The bottleneck is the largest batch — split it.
- **Structured extraction format:** Ask extraction agents to return structured text
with consistent delimiters so the reduce agent can parse reliably.
- **The report is the artifact:** The HTML dashboard is what the user keeps. It
should be beautiful, honest, and useful. Every section should feel like it was
written about them specifically, because it was.

@@ -0,0 +1,795 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Compound Planning — What 370 Files Reveal</title>
|
||||
<link rel="preconnect" href="https://fonts.googleapis.com">
|
||||
<link href="https://fonts.googleapis.com/css2?family=Playfair+Display:ital,wght@0,400;0,500;1,400&family=Inter:wght@300;400;500;600&family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
|
||||
<style>
|
||||
*, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
|
||||
|
||||
:root {
|
||||
--bg: #FDFCFB;
|
||||
--slate-900: #0f172a;
|
||||
--slate-800: #1e293b;
|
||||
--slate-700: #334155;
|
||||
--slate-600: #475569;
|
||||
--slate-500: #64748b;
|
||||
--slate-400: #94a3b8;
|
||||
--slate-300: #cbd5e1;
|
||||
--slate-200: #e2e8f0;
|
||||
--slate-100: #f1f5f9;
|
||||
--slate-50: #f8fafc;
|
||||
--amber-500: #f59e0b;
|
||||
--amber-600: #d97706;
|
||||
--amber-700: #b45309;
|
||||
--amber-50: #fffbeb;
--amber-200: #fde68a; /* prompt-body code text */
|
||||
--emerald-500: #10b981;
|
||||
--emerald-600: #059669;
|
||||
--emerald-400: #34d399;
|
||||
--emerald-900: #064e3b;
|
||||
--emerald-800: #065f46;
|
||||
--emerald-100: #d1fae5;
|
||||
--emerald-50: #ecfdf5;
|
||||
--rose-500: #f43f5e;
|
||||
--rose-600: #e11d48;
|
||||
--rose-400: #fb7185;
|
||||
--rose-900: #881337;
|
||||
--rose-800: #9f1239;
|
||||
--rose-100: #ffe4e6;
|
||||
--rose-50: #fff1f2;
|
||||
--indigo-500: #6366f1;
|
||||
--indigo-600: #4f46e5;
|
||||
--purple-600: #9333ea;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: 'Inter', ui-sans-serif, system-ui, sans-serif;
|
||||
background: var(--bg);
|
||||
color: var(--slate-800);
|
||||
-webkit-font-smoothing: antialiased;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1024px;
|
||||
margin: 0 auto;
|
||||
padding: 48px 24px 64px;
|
||||
}
|
||||
@media (min-width: 768px) { .container { padding: 96px 24px 80px; } }
|
||||
|
||||
/* Typography */
|
||||
.font-serif { font-family: 'Playfair Display', ui-serif, Georgia, serif; }
|
||||
.font-mono { font-family: 'JetBrains Mono', ui-monospace, monospace; }
|
||||
|
||||
/* Header */
|
||||
header {
|
||||
border-bottom: 1px solid var(--slate-200);
|
||||
padding-bottom: 40px;
|
||||
margin-bottom: 96px;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: flex-end;
|
||||
flex-wrap: wrap;
|
||||
gap: 16px;
|
||||
}
|
||||
header h1 {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 36px;
|
||||
font-weight: 400;
|
||||
color: var(--slate-900);
|
||||
line-height: 1.2;
|
||||
}
|
||||
header .meta {
|
||||
font-size: 15px;
|
||||
font-weight: 300;
|
||||
color: var(--slate-500);
|
||||
letter-spacing: 0.04em;
|
||||
}
|
||||
|
||||
/* Sections */
|
||||
.section { margin-bottom: 128px; }
|
||||
.section-label {
|
||||
font-size: 12px;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.2em;
|
||||
color: var(--slate-400);
|
||||
margin-bottom: 24px;
|
||||
}
|
||||
|
||||
/* Narrative + KPIs */
|
||||
.summary {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr;
|
||||
gap: 48px;
|
||||
align-items: start;
|
||||
}
|
||||
@media (min-width: 768px) {
|
||||
.summary { grid-template-columns: 1fr 240px; }
|
||||
}
|
||||
.narrative {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 26px;
|
||||
line-height: 1.45;
|
||||
color: var(--slate-900);
|
||||
}
|
||||
.narrative .highlight {
|
||||
background: var(--amber-50);
|
||||
color: var(--amber-700);
|
||||
padding: 1px 6px;
|
||||
border-radius: 3px;
|
||||
}
|
||||
.kpi-stack {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 32px;
|
||||
}
|
||||
@media (min-width: 768px) {
|
||||
.kpi-stack { border-left: 1px solid var(--slate-200); padding-left: 32px; }
|
||||
}
|
||||
.kpi-item .kpi-value {
|
||||
font-size: 36px;
|
||||
font-weight: 300;
|
||||
color: var(--slate-900);
|
||||
letter-spacing: -0.02em;
|
||||
}
|
||||
.kpi-item .kpi-label {
|
||||
font-size: 10px;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.15em;
|
||||
color: var(--slate-500);
|
||||
margin-top: 2px;
|
||||
}
|
||||
|
||||
/* Taxonomy bars */
|
||||
.taxonomy-list { display: flex; flex-direction: column; gap: 20px; }
|
||||
.tax-row { display: grid; grid-template-columns: 24px 1fr 52px; gap: 12px; align-items: center; }
|
||||
.tax-rank {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-400);
|
||||
text-align: right;
|
||||
}
|
||||
.tax-body { display: flex; flex-direction: column; gap: 6px; }
|
||||
.tax-label { font-size: 14px; font-weight: 500; color: var(--slate-800); }
|
||||
.tax-bar-track { height: 4px; background: var(--slate-100); border-radius: 100px; overflow: hidden; }
|
||||
.tax-bar-fill { height: 100%; border-radius: 100px; transition: width 0.6s ease; }
|
||||
.tax-bar-fill.top { background: var(--amber-500); }
|
||||
.tax-bar-fill.rest { background: var(--slate-300); }
|
||||
.tax-pct {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-500);
|
||||
text-align: right;
|
||||
}
|
||||
.tax-quote {
|
||||
font-size: 12px;
|
||||
font-style: italic;
|
||||
color: var(--slate-500);
|
||||
margin-top: 2px;
|
||||
}
|
||||
|
||||
/* Evolution timeline */
|
||||
.evolution-grid {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr;
|
||||
gap: 24px;
|
||||
}
|
||||
@media (min-width: 768px) { .evolution-grid { grid-template-columns: repeat(3, 1fr); } }
|
||||
.evo-card {
|
||||
background: white;
|
||||
border: 1px solid var(--slate-200);
|
||||
border-radius: 16px;
|
||||
padding: 28px;
|
||||
}
|
||||
.evo-card .evo-month {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 20px;
|
||||
color: var(--slate-900);
|
||||
margin-bottom: 4px;
|
||||
}
|
||||
.evo-card .evo-theme {
|
||||
font-size: 12px;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.12em;
|
||||
margin-bottom: 16px;
|
||||
}
|
||||
.evo-card .evo-desc {
|
||||
font-size: 14px;
|
||||
color: var(--slate-600);
|
||||
line-height: 1.6;
|
||||
}
|
||||
.evo-card .evo-stat {
|
||||
margin-top: 16px;
|
||||
padding-top: 16px;
|
||||
border-top: 1px solid var(--slate-100);
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-500);
|
||||
}
|
||||
.evo-jan .evo-theme { color: var(--slate-600); }
|
||||
.evo-feb .evo-theme { color: var(--amber-600); }
|
||||
.evo-mar .evo-theme { color: var(--indigo-600); }
|
||||
|
||||
/* Quality comparison */
|
||||
.quality-grid {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr;
|
||||
gap: 24px;
|
||||
}
|
||||
@media (min-width: 768px) { .quality-grid { grid-template-columns: 1fr 1fr; } }
|
||||
.q-card {
|
||||
border-radius: 24px;
|
||||
padding: 36px;
|
||||
}
|
||||
.q-card.good {
|
||||
background: color-mix(in srgb, var(--emerald-50) 50%, transparent);
|
||||
border: 1px solid var(--emerald-100);
|
||||
}
|
||||
.q-card.bad {
|
||||
background: color-mix(in srgb, var(--rose-50) 50%, transparent);
|
||||
border: 1px solid var(--rose-100);
|
||||
}
|
||||
.q-card .q-icon { font-size: 20px; margin-bottom: 12px; }
|
||||
.q-card .q-title {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 22px;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
.q-card.good .q-title { color: var(--emerald-900); }
|
||||
.q-card.bad .q-title { color: var(--rose-900); }
|
||||
.q-list { list-style: none; display: flex; flex-direction: column; gap: 14px; }
|
||||
.q-list li {
|
||||
display: flex;
|
||||
align-items: flex-start;
|
||||
gap: 10px;
|
||||
font-size: 14px;
|
||||
line-height: 1.6;
|
||||
}
|
||||
.q-card.good .q-list li { color: color-mix(in srgb, var(--emerald-800) 90%, transparent); }
|
||||
.q-card.bad .q-list li { color: color-mix(in srgb, var(--rose-800) 90%, transparent); }
|
||||
.q-dot {
|
||||
width: 6px;
|
||||
height: 6px;
|
||||
border-radius: 50%;
|
||||
flex-shrink: 0;
|
||||
margin-top: 7px;
|
||||
}
|
||||
.q-card.good .q-dot { background: var(--emerald-400); }
|
||||
.q-card.bad .q-dot { background: var(--rose-400); }
|
||||
|
||||
/* Phrases */
|
||||
.phrases-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
gap: 12px;
|
||||
}
|
||||
@media (min-width: 768px) { .phrases-grid { grid-template-columns: repeat(3, 1fr); } }
|
||||
.phrase-chip {
|
||||
background: white;
|
||||
border: 1px solid var(--slate-200);
|
||||
border-radius: 12px;
|
||||
padding: 14px 16px;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
}
|
||||
.phrase-text {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-700);
|
||||
white-space: nowrap;
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
}
|
||||
.phrase-count {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 11px;
|
||||
color: var(--slate-400);
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
/* Dark action panel */
|
||||
.action-panel {
|
||||
background: var(--slate-900);
|
||||
color: white;
|
||||
border-radius: 24px;
|
||||
padding: 40px;
|
||||
box-shadow: 0 20px 25px -5px rgb(0 0 0 / 0.1), 0 8px 10px -6px rgb(0 0 0 / 0.1);
|
||||
}
|
||||
@media (min-width: 768px) { .action-panel { padding: 56px; } }
|
||||
.action-panel .section-label { color: var(--slate-500); }
|
||||
.action-panel .ap-intro {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 22px;
|
||||
color: white;
|
||||
line-height: 1.4;
|
||||
margin-bottom: 32px;
|
||||
max-width: 640px;
|
||||
}
|
||||
.prompt-block {
|
||||
background: color-mix(in srgb, var(--slate-800) 80%, transparent);
|
||||
border: 1px solid color-mix(in srgb, var(--slate-700) 50%, transparent);
|
||||
border-radius: 16px;
|
||||
overflow: hidden;
|
||||
}
|
||||
.prompt-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
padding: 12px 20px;
|
||||
border-bottom: 1px solid color-mix(in srgb, var(--slate-700) 30%, transparent);
|
||||
}
|
||||
.prompt-header-label {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-400);
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
}
|
||||
.prompt-header-label svg { width: 14px; height: 14px; }
|
||||
.copy-btn {
|
||||
background: none;
|
||||
border: none;
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 12px;
|
||||
color: var(--slate-400);
|
||||
cursor: pointer;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 6px;
|
||||
transition: color 0.2s;
|
||||
}
|
||||
.copy-btn:hover { color: white; }
|
||||
.copy-btn.copied { color: var(--emerald-400); }
|
||||
.prompt-body {
|
||||
padding: 20px;
|
||||
max-height: 480px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
.prompt-body pre {
|
||||
font-family: 'JetBrains Mono', monospace;
|
||||
font-size: 13px;
|
||||
line-height: 1.7;
|
||||
color: color-mix(in srgb, var(--amber-200) 90%, transparent);
|
||||
white-space: pre-wrap;
|
||||
word-break: break-word;
|
||||
}
|
||||
.prompt-body pre .comment {
|
||||
color: var(--slate-500);
|
||||
}
|
||||
|
||||
/* Glow card */
|
||||
.glow-wrap {
|
||||
position: relative;
|
||||
margin-top: 48px;
|
||||
}
|
||||
.glow-bg {
|
||||
position: absolute;
|
||||
inset: -2px;
|
||||
background: linear-gradient(135deg, var(--indigo-500), var(--purple-600));
|
||||
border-radius: 26px;
|
||||
opacity: 0.15;
|
||||
filter: blur(16px);
|
||||
transition: opacity 0.5s;
|
||||
}
|
||||
.glow-wrap:hover .glow-bg { opacity: 0.25; }
|
||||
.glow-card {
|
||||
position: relative;
|
||||
background: white;
|
||||
border: 1px solid var(--slate-200);
|
||||
border-radius: 24px;
|
||||
padding: 32px 36px;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
flex-wrap: wrap;
|
||||
gap: 20px;
|
||||
}
|
||||
.glow-card .gc-text {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-size: 18px;
|
||||
font-weight: 500;
|
||||
color: var(--slate-900);
|
||||
line-height: 1.5;
|
||||
max-width: 640px;
|
||||
}
|
||||
.glow-card .gc-text em {
|
||||
font-style: italic;
|
||||
color: var(--indigo-600);
|
||||
}
|
||||
|
||||
/* Footer */
|
||||
footer {
|
||||
border-top: 1px solid var(--slate-200);
|
||||
padding-top: 48px;
|
||||
margin-top: 0;
|
||||
text-align: center;
|
||||
}
|
||||
footer p {
|
||||
font-family: 'Playfair Display', serif;
|
||||
font-style: italic;
|
||||
font-size: 15px;
|
||||
color: var(--slate-400);
|
||||
}
|
||||
|
||||
/* Scrollbar in dark code block */
|
||||
.prompt-body::-webkit-scrollbar { width: 6px; }
|
||||
.prompt-body::-webkit-scrollbar-track { background: transparent; }
|
||||
.prompt-body::-webkit-scrollbar-thumb { background: var(--slate-700); border-radius: 3px; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
|
||||
<header>
|
||||
<div>
|
||||
<h1>What 370 Files Reveal About<br>How You Plan</h1>
|
||||
<div class="meta" style="margin-top: 8px;">backnotprop/plannotator · Jan 7 – Mar 18, 2026</div>
|
||||
</div>
|
||||
<div class="meta" style="text-align: right;">
|
||||
<span class="font-mono" style="font-size: 12px;">202 denials · 168 annotations · 71 days</span>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- 1. Narrative + KPIs -->
|
||||
<div class="section">
|
||||
<div class="section-label">1. The story in the data</div>
|
||||
<div class="summary">
|
||||
<div class="narrative">
|
||||
Across 71 days you denied or revised <span class="highlight">202 plans</span> before any code was written. The single most common reason—appearing in 1 out of 4 denials—was the same: the agent jumped to implementation without telling you <em>what</em> it was building, <em>why</em>, or <em>how</em>. Missing narrative. Missing context. Missing the story. Your expectations evolved from “does it work?” in January to “tell me the story and be confident” by March.
|
||||
</div>
|
||||
<div class="kpi-stack">
|
||||
<div class="kpi-item">
|
||||
<div class="kpi-value">25.7%</div>
|
||||
<div class="kpi-label">Denials for missing narrative</div>
|
||||
</div>
|
||||
<div class="kpi-item">
|
||||
<div class="kpi-value">50%</div>
|
||||
<div class="kpi-label">Plans revised before coding</div>
|
||||
</div>
|
||||
<div class="kpi-item">
|
||||
<div class="kpi-value">12</div>
|
||||
<div class="kpi-label">Distinct denial categories</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 2. Denial Taxonomy -->
|
||||
<div class="section">
|
||||
<div class="section-label">2. Why plans get denied</div>
|
||||
<div class="taxonomy-list">
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">1</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Missing Narrative / Overview</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill top" style="width: 100%"></div></div>
|
||||
<span class="tax-quote">"This plan is denied without narrative detail and rationales."</span>
|
||||
</div>
|
||||
<span class="tax-pct">25.7%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">2</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Clarification Needed</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 65%"></div></div>
|
||||
<span class="tax-quote">"What does this Mean???"</span>
|
||||
</div>
|
||||
<span class="tax-pct">16.8%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">3</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Testing / Procedural</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 54%"></div></div>
|
||||
<span class="tax-quote">"I'm denying so you can create a diff."</span>
|
||||
</div>
|
||||
<span class="tax-pct">13.9%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">4</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Wrong Approach / Over-Engineered</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 37%"></div></div>
|
||||
<span class="tax-quote">"Why are we doing difficult shit here? I want a hover experience."</span>
|
||||
</div>
|
||||
<span class="tax-pct">9.4%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">5</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Process Requirement</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 31%"></div></div>
|
||||
<span class="tax-quote">"Make sure you feature branch."</span>
|
||||
</div>
|
||||
<span class="tax-pct">7.9%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">6</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Confidence / Risk Check</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 29%"></div></div>
|
||||
<span class="tax-quote">"Take a step back, breathe, make sure we're not being irrational."</span>
|
||||
</div>
|
||||
<span class="tax-pct">7.4%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">7</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Content Removal</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 27%"></div></div>
|
||||
<span class="tax-quote">"I don't want this in the plan."</span>
|
||||
</div>
|
||||
<span class="tax-pct">6.9%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">8</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Implementation Bug Found</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 23%"></div></div>
|
||||
</div>
|
||||
<span class="tax-pct">5.9%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">9</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Design / UX Issue</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 21%"></div></div>
|
||||
</div>
|
||||
<span class="tax-pct">5.4%</span>
|
||||
</div>
|
||||
<div class="tax-row">
|
||||
<span class="tax-rank">10</span>
|
||||
<div class="tax-body">
|
||||
<span class="tax-label">Naming / Terminology</span>
|
||||
<div class="tax-bar-track"><div class="tax-bar-fill rest" style="width: 16%"></div></div>
|
||||
<span class="tax-quote">"Why do you keep calling it Simplified????"</span>
|
||||
</div>
|
||||
<span class="tax-pct">4.0%</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 3. Evolution -->
|
||||
<div class="section">
|
||||
<div class="section-label">3. How your expectations evolved</div>
|
||||
<div class="evolution-grid">
|
||||
<div class="evo-card evo-jan">
|
||||
<div class="evo-month">January</div>
|
||||
<div class="evo-theme">"Does it work?"</div>
|
||||
<div class="evo-desc">Bug-hunting phase. You were hands-on testing View Logs, iterating on session scoping heuristics. 60% of denials were implementation bugs and verification failures. No mention of “narrative” or “overview” yet.</div>
|
||||
<div class="evo-stat">26 denials · 0 narrative requests</div>
|
||||
</div>
|
||||
<div class="evo-card evo-feb">
|
||||
<div class="evo-month">February</div>
|
||||
<div class="evo-theme">"Follow the process"</div>
|
||||
<div class="evo-desc">Process gates emerged: feature branches, Linear tickets, pull main. 40% of denials were procedural (diff testing). UX polish intensified. The first narrative demands appeared: “I want a narrative under each section.”</div>
|
||||
<div class="evo-stat">48 denials · 6 narrative requests</div>
|
||||
</div>
|
||||
<div class="evo-card evo-mar">
|
||||
<div class="evo-month">March</div>
|
||||
<div class="evo-theme">"Tell me the story"</div>
|
||||
<div class="evo-desc">Narrative became the #1 gate. You created a “Missing overview” label and applied it systematically. Confidence checks became standard. You began telling agents to “take a step back, breathe, and analyze.”</div>
|
||||
<div class="evo-stat">128 denials · 25+ narrative requests</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 4. Quality comparison -->
|
||||
<div class="section">
|
||||
<div class="section-label">4. What works vs. what doesn't</div>
|
||||
<div class="quality-grid">
|
||||
<div class="q-card good">
|
||||
<div class="q-icon">✓</div>
|
||||
<div class="q-title">What approved plans do</div>
|
||||
<ul class="q-list">
|
||||
<li><span class="q-dot"></span>Lead with a narrative overview: what exists, what changes, why</li>
|
||||
<li><span class="q-dot"></span>State confidence and identify risks proactively</li>
|
||||
<li><span class="q-dot"></span>Reference existing codebase patterns before proposing new code</li>
|
||||
<li><span class="q-dot"></span>Use explicit, transparent naming (not euphemisms)</li>
|
||||
<li><span class="q-dot"></span>Break large work into phases with evaluation gates</li>
|
||||
<li><span class="q-dot"></span>Include example output for user-facing changes</li>
|
||||
<li><span class="q-dot"></span>Specify feature branch and ticket creation steps</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="q-card bad">
|
||||
<div class="q-icon">✗</div>
|
||||
<div class="q-title">What agents keep getting wrong</div>
|
||||
<ul class="q-list">
|
||||
<li><span class="q-dot"></span>Jump to implementation steps without narrative context</li>
|
||||
<li><span class="q-dot"></span>Over-engineer: Shift+Click when hover works, MCP tool when a README suffices</li>
|
||||
<li><span class="q-dot"></span>Introduce new code for things the codebase already solves</li>
|
||||
<li><span class="q-dot"></span>Propose work on top of failing lint/type checks</li>
|
||||
<li><span class="q-dot"></span>Use vague or euphemistic naming (“Accept” instead of “Git Add”)</li>
|
||||
<li><span class="q-dot"></span>Wait to be asked for confidence instead of stating it</li>
|
||||
<li><span class="q-dot"></span>Rush to modify instead of reporting what they see</li>
|
||||
</ul>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 5. The actionable output -->
|
||||
<div class="section">
|
||||
<div class="section-label">5. The actionable output</div>
|
||||
<div class="narrative" style="margin-bottom: 32px;">
|
||||
The analysis produced <span class="highlight">17 specific prompt instructions</span> that, if embedded in a planning prompt, would address ~70% of all denial reasons. The biggest three:
|
||||
</div>
|
||||
<div style="display: flex; flex-direction: column; gap: 20px;">
|
||||
<div style="display: flex; gap: 16px; align-items: flex-start;">
|
||||
<span class="font-mono" style="font-size: 24px; font-weight: 300; color: var(--amber-500); flex-shrink: 0; width: 32px; text-align: right;">1</span>
|
||||
<div>
|
||||
<div style="font-size: 17px; font-weight: 500; color: var(--slate-900); margin-bottom: 4px;">Every plan MUST start with a Solution Overview</div>
|
||||
<div style="font-size: 14px; color: var(--slate-600); line-height: 1.5;">What exists, what changes, why, how. This alone addresses 1 in 4 denials.</div>
|
||||
</div>
|
||||
</div>
|
||||
<div style="display: flex; gap: 16px; align-items: flex-start;">
|
||||
<span class="font-mono" style="font-size: 24px; font-weight: 300; color: var(--amber-500); flex-shrink: 0; width: 32px; text-align: right;">2</span>
|
||||
<div>
|
||||
<div style="font-size: 17px; font-weight: 500; color: var(--slate-900); margin-bottom: 4px;">End every plan with a Confidence Assessment</div>
|
||||
<div style="font-size: 14px; color: var(--slate-600); line-height: 1.5;">Don’t wait to be asked. State your confidence, identify risks, flag uncertainties.</div>
|
||||
</div>
|
||||
</div>
|
||||
<div style="display: flex; gap: 16px; align-items: flex-start;">
|
||||
<span class="font-mono" style="font-size: 24px; font-weight: 300; color: var(--amber-500); flex-shrink: 0; width: 32px; text-align: right;">3</span>
|
||||
<div>
|
||||
<div style="font-size: 17px; font-weight: 500; color: var(--slate-900); margin-bottom: 4px;">Search for existing patterns before proposing new code</div>
|
||||
<div style="font-size: 14px; color: var(--slate-600); line-height: 1.5;">Explicitly state what you found in the codebase. Prefer reuse over new implementation.</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 6. Recurring phrases -->
|
||||
<div class="section">
|
||||
<div class="section-label">6. Your most-used phrases</div>
|
||||
<div class="phrases-grid">
|
||||
<div class="phrase-chip"><span class="phrase-text">"narrative"</span><span class="phrase-count">50+</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"I don't want this in the plan"</span><span class="phrase-count">10</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"feature branch"</span><span class="phrase-count">8+</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"confidence"</span><span class="phrase-count">8+</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"Missing overview"</span><span class="phrase-count">14</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"front-end design skill"</span><span class="phrase-count">16</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"separation of concerns"</span><span class="phrase-count">6</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"Take a step back, breathe"</span><span class="phrase-count">6</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"how does this work"</span><span class="phrase-count">5+</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"what the fuck"</span><span class="phrase-count">4</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"create a ticket"</span><span class="phrase-count">4+</span></div>
|
||||
<div class="phrase-chip"><span class="phrase-text">"reusable"</span><span class="phrase-count">19+</span></div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- 7. Corrective Prompt -->
|
||||
<div class="section" style="margin-bottom: 64px;">
|
||||
<div class="action-panel">
|
||||
<div class="section-label">7. The corrective prompt</div>
|
||||
<div class="ap-intro">
|
||||
These 17 instructions were extracted directly from your denial patterns. Embedding them in a planning prompt would address approximately 70% of all denial reasons.
|
||||
</div>
|
||||
<div class="prompt-block">
|
||||
<div class="prompt-header">
|
||||
<span class="prompt-header-label">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><polyline points="4 17 10 11 4 5"></polyline><line x1="12" y1="19" x2="20" y2="19"></line></svg>
|
||||
planning-instructions.md
|
||||
</span>
|
||||
<button class="copy-btn" onclick="copyPrompt(this)">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg>
|
||||
Copy
|
||||
</button>
|
||||
</div>
|
||||
<div class="prompt-body">
|
||||
<pre id="prompt-content"><span class="comment"># Planning Instructions
|
||||
# Derived from 370 files of denial & annotation analysis</span>
|
||||
|
||||
1. STRUCTURE: Every plan MUST begin with a "Solution Overview"
|
||||
containing 2-3 paragraphs of narrative prose explaining:
|
||||
- What exists today (current state)
|
||||
- What will change and why
|
||||
- How it will be built (approach summary)
|
||||
Do NOT skip this. Do NOT replace it with bullet points.
|
||||
|
||||
2. NARRATIVE: Every major section must include a rationale
|
||||
paragraph — not just what will be done, but WHY this
|
||||
approach was chosen over alternatives.
|
||||
|
||||
3. FEATURE BRANCH: Always specify implementation will occur
|
||||
on a feature branch. State the branch name. Never plan
|
||||
to work directly on main.
|
||||
|
||||
4. EXISTING PATTERNS: Before proposing any new implementation,
|
||||
search the codebase for existing patterns that solve the
|
||||
same problem. Explicitly state what you found and whether
|
||||
you will reuse it. Prefer reuse over new code.
|
||||
|
||||
5. CONFIDENCE STATEMENT: End the plan with a "Confidence
|
||||
Assessment" section. State your confidence level, identify
|
||||
risks or edge cases, and note uncertainties. Do not wait
|
||||
to be asked.
|
||||
|
||||
6. PHASING: For plans with more than 3 steps, break them into
|
||||
numbered phases. After each phase, note "Pause for
|
||||
evaluation" so the reviewer can assess before proceeding.
|
||||
|
||||
7. ISSUE TRACKING: If the project uses Linear or GitHub Issues,
|
||||
include a step to create relevant tickets BEFORE
|
||||
implementation. Backlog items should be separate tickets.
|
||||
|
||||
8. SIMPLICITY: Choose the simplest approach that meets
|
||||
requirements. Do not introduce modifier keys when hover
|
||||
works. Do not build a framework when a README suffices.
|
||||
|
||||
9. NAMING: Use explicit, transparent names for user-facing
|
||||
features. Do not euphemize Git operations ("Git Add"
|
||||
not "Accept"). Match existing product naming conventions.
|
||||
|
||||
10. CODE QUALITY: State that implementation will follow clean
|
||||
code principles: modular architecture, separation of
|
||||
concerns, no circumventing lint or type checks.
|
||||
|
||||
11. CLEAN FOUNDATION: If the codebase has failing lint or type
|
||||
checks, address these BEFORE proposing new features. State
|
||||
the current CI/CD state.
|
||||
|
||||
12. PRIVACY: For features involving data storage or sharing,
|
||||
explicitly state privacy guarantees. Require user
|
||||
confirmation before storing data.
|
||||
|
||||
13. EXAMPLES: When the plan involves user-facing output or UI,
|
||||
include an example of what it will look like.
|
||||
|
||||
14. FOCUSED SCOPE: Do not include sections that are obvious,
|
||||
boilerplate, or previously asked to be removed. Keep the
|
||||
plan focused rather than comprehensive.
|
||||
|
||||
15. DESIGN SKILL: For any frontend/UI work, invoke the
|
||||
front-end design skill to validate the approach. Note
|
||||
this invocation explicitly in the plan.
|
||||
|
||||
16. VERIFICATION STEP: For refactors or multi-file changes,
|
||||
include a verification step with line-by-line comparison
|
||||
of affected code paths.
|
||||
|
||||
17. DELIBERATION: If the plan involves a dramatic shift, state
|
||||
that you have re-evaluated the approach, traced through
|
||||
affected files mentally, and are confident in the plan.
|
||||
Do not rush.</pre>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="glow-wrap">
|
||||
<div class="glow-bg"></div>
|
||||
<div class="glow-card">
|
||||
<div class="gc-text">
|
||||
These instructions are yours — derived from <em>your feedback, your language, your standards</em>. Copy them into your planning prompt and watch the deny rate drop.
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<footer>
|
||||
<p>Analysis of 202 denied plans and 168 annotation files from the Plannotator archive.</p>
|
||||
</footer>
|
||||
|
||||
</div>
|
||||
|
||||
<script>
|
||||
function copyPrompt(btn) {
|
||||
const text = document.getElementById('prompt-content').textContent;
|
||||
navigator.clipboard.writeText(text).then(() => {
|
||||
btn.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M22 11.08V12a10 10 0 1 1-5.93-9.14"></path><polyline points="22 4 12 14.01 9 11.01"></polyline></svg> Copied';
|
||||
btn.classList.add('copied');
|
||||
setTimeout(() => {
|
||||
btn.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg> Copy';
|
||||
btn.classList.remove('copied');
|
||||
}, 2000);
|
||||
});
|
||||
}
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
@@ -0,0 +1,282 @@
|
||||
# Claude Code Fallback
|
||||
|
||||
Read this file only when the user does **not** have a usable Plannotator archive.
|
||||
|
||||
This is the secondary path for ordinary Claude Code users whose denial history
|
||||
exists in `~/.claude/projects/` rather than `~/.plannotator/plans/`.
|
||||
|
||||
The goal is the same as the main skill:
|
||||
|
||||
- extract the user's real denial reasons
|
||||
- reduce them into a taxonomy and prompt corrections
|
||||
- produce the same HTML report design and section flow
|
||||
|
||||
## Source of Truth
|
||||
|
||||
Use the bundled parser at:
|
||||
|
||||
- [scripts/extract_exit_plan_mode_outcomes.py](../scripts/extract_exit_plan_mode_outcomes.py)
|
||||
|
||||
Resolve that script path relative to this skill directory before running it.
|
||||
|
||||
This script normalizes `ExitPlanMode` outcomes from Claude Code JSONL transcripts
|
||||
and emits clean JSON parts containing only human-authored denial reasons by default.
|
||||
|
||||
Do **not** read raw `~/.claude/projects/**/*.jsonl` directly unless:
|
||||
|
||||
- the parser fails
|
||||
- the user asks for audit-level verification
|
||||
- you need to inspect one or two suspicious records by hand
|
||||
|
||||
The parser exists specifically to strip transcript noise such as generic native
|
||||
reject strings and wrapper boilerplate.
|
||||
|
||||
## Run the Parser
|
||||
|
||||
Create the working directory first:

```bash
mkdir -p /tmp/compound-planning
```

Then run the bundled parser. Prefer `python3`; if unavailable, use `python`. Use a
resolved absolute script path, not a repo-local copy.

```bash
python3 [RESOLVED SKILL PATH]/scripts/extract_exit_plan_mode_outcomes.py \
  --projects-dir ~/.claude/projects \
  --json-out /tmp/compound-planning/claude-code-human-reasons.json \
  --show-samples 0
```
|
||||
|
||||
Expected output:
|
||||
|
||||
- manifest:
|
||||
`/tmp/compound-planning/claude-code-human-reasons/claude-code-human-reasons.manifest.json`
|
||||
- part files:
|
||||
`/tmp/compound-planning/claude-code-human-reasons/claude-code-human-reasons.part-XXXX-of-XXXX.json`
|
||||
|
||||
The script prints how many records were detected and how many JSON part files were emitted.
|
||||
|
||||
## What To Read First
|
||||
|
||||
Read the manifest before reading any part file.
|
||||
|
||||
The manifest gives you:
|
||||
|
||||
- total filtered record count
|
||||
- total `ExitPlanMode` attempts
|
||||
- native approval / denial counts
|
||||
- non-native denial counts
|
||||
- part file list
|
||||
|
||||
Use the part files only after you understand the overall dataset shape.
|
||||
|
||||
## Inventory In Fallback Mode
|
||||
|
||||
In Claude Code fallback mode, report this dataset instead of the Plannotator file counts:
|
||||
|
||||
- human denial reasons found
|
||||
- total `ExitPlanMode` attempts scanned
|
||||
- native approvals
|
||||
- native denials with extractable inline reason
|
||||
- native denials without recoverable reason
|
||||
- non-native denials with recoverable payload
|
||||
- number of emitted JSON parts
|
||||
- date range from the records
|
||||
- total days spanned
|
||||
- distinct sessions
|
||||
- distinct project roots / `cwd` values
|
||||
|
||||
Also calculate:

- average `plan_length_chars` where present
- percentage of all denials that contain a recoverable human reason (a small
calculation sketch follows below)

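A minimal sketch of that percentage calculation, assuming the manifest exposes a
`human_reasons_total` count plus native and non-native denial counts (the key names
below are assumptions; inspect the manifest the parser actually wrote and substitute
the real ones):

```python
import json
from pathlib import Path

# Path produced by the parser run above; adjust if you changed --json-out.
manifest = json.loads(Path(
    "/tmp/compound-planning/claude-code-human-reasons/"
    "claude-code-human-reasons.manifest.json"
).read_text())

# Assumed key names -- replace with the actual manifest fields.
human_reasons = manifest.get("human_reasons_total", 0)
total_denials = manifest.get("native_denials", 0) + manifest.get("non_native_denials", 0)

if total_denials:
    pct = 100 * human_reasons / total_denials
    print(f"{human_reasons}/{total_denials} denials ({pct:.1f}%) have a recoverable human reason")
else:
    print("No denials found in the manifest")
```
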
Do **not** fabricate Plannotator-only inventory fields in fallback mode:
|
||||
|
||||
- no `*-approved.md` counts
|
||||
- no `*.annotations.md` counts
|
||||
- no `*.diff.md` counts
|
||||
- no approved-plan line-count analysis
|
||||
|
||||
If the user asks for those specifically, state that Claude Code log fallback mode
|
||||
does not contain those artifacts.
|
||||
|
||||
### Previous Report Detection In Fallback Mode
|
||||
|
||||
Previous report detection still applies. Check the user's home directory or
`~/.plannotator/plans/` for existing `compound-planning-report*.html` files. If
found, offer the same incremental vs full choice as Plannotator mode. In
incremental mode, filter the parser output by timestamp rather than by filename
date — use the `timestamp` field in each JSON record.

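A minimal sketch of that incremental filter, assuming each part file holds a JSON
array of records with an ISO-8601 `timestamp` field (the cutoff value is a
placeholder for the previous report's generation time):

```python
import json
from pathlib import Path

cutoff = "2026-03-01T00:00:00Z"  # placeholder: when the previous report was generated

parts_dir = Path("/tmp/compound-planning/claude-code-human-reasons")
new_records = []
for part in sorted(parts_dir.glob("*.part-*.json")):
    # Assumes each part file is a JSON array of record objects.
    for record in json.loads(part.read_text()):
        # Lexicographic comparison works for uniformly formatted ISO-8601 strings.
        if (record.get("timestamp") or "") > cutoff:
            new_records.append(record)

print(f"{len(new_records)} records newer than {cutoff}")
```
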
If no previous report exists, use the first-report naming convention
(`compound-planning-report.html`). Otherwise use the next version number.

## Extraction In Fallback Mode
|
||||
|
||||
Treat the emitted JSON part files as the clean source dataset.
|
||||
|
||||
### Batching
|
||||
|
||||
- **Small datasets (< 200 records):** read the part files directly without extra agents
|
||||
- **Medium datasets (200-800 records):** split by part file or time range into 2-4 agents
|
||||
- **Large datasets (800+ records):** split by part file groups or balanced time ranges
|
||||
|
||||
All extraction agents should use `model: "haiku"` — they're doing straightforward
|
||||
file reading and structured extraction, not reasoning.
|
||||
|
||||
Each extraction agent should read every record in its assigned part files and write
|
||||
clean markdown output to:
|
||||
|
||||
```text
/tmp/compound-planning/extraction-{batch-name}.md
```
|
||||
|
||||
### Extraction Prompt For Claude Code Denial Records
|
||||
|
||||
Use this prompt for each fallback extraction batch (adapt the part files and output path):
|
||||
|
||||
```text
You are extracting structured data from Claude Code ExitPlanMode denial records.

Files to read: [JSON PART FILES]
Output: Write your complete results to [OUTPUT FILE PATH]

Read EVERY record in the assigned files. Each record already contains a cleaned
human_reason field. Use that as the primary source text.

For EACH record, extract:
- Date
- Session ID
- Project / cwd
- Topic (only if inferable from the reason or plan path; otherwise say "Unknown from logs")
- Human denial reason
- What was specifically asked to change
- Feedback type (let the content determine the category)
- Notable phrases
- Reason source (`native_inline_reason`, `non_native_freeform_payload`, or `structured_quote_extraction`)
- Plan path if present
- Plan length in chars if present

Do NOT skip any records. One entry per record.

Format each entry as:
**[session_id :: tool_use_id]**
- Date: ...
- Project: ...
- Topic: ...
- Human denial reason: ...
- Feedback type: ...
- Specific asks: ...
- Notable phrases: ...
- Reason source: ...
- Plan path: ...
- Plan length chars: ...
---

After processing all records, write the complete results to [OUTPUT FILE PATH].
State the total record count at the end of the file.
```
|
||||
|
||||
## Reduction In Fallback Mode
|
||||
|
||||
The reduction step stays conceptually the same:
|
||||
|
||||
- taxonomy
|
||||
- top patterns
|
||||
- recurring phrases
|
||||
- reviewer values
|
||||
- recurring agent mistakes
|
||||
- structural requests
|
||||
- evolution over time
|
||||
- corrective prompt instructions
|
||||
|
||||
Use `model: "sonnet"` for reduction agents, same as Plannotator mode. The
|
||||
two-stage reduce (partial reduces for 21+ extraction files) also applies when
|
||||
there are many part files.
|
||||
|
||||
But interpret the dataset correctly:
|
||||
|
||||
- this is denial-reason evidence from Claude Code logs
|
||||
- not every denial has a recoverable human reason
|
||||
- annotations may be absent entirely
|
||||
- success traits are often inferred from the inverse of repeated denial feedback
|
||||
|
||||
If the evidence for "what works" is weaker than the evidence for "what fails",
|
||||
say that explicitly.
|
||||
|
||||
## HTML Report Adaptation
|
||||
|
||||
Use the same template and the same section order as the main skill.
|
||||
|
||||
In fallback mode:
|
||||
|
||||
- explicitly state in the header/meta that the source is Claude Code `ExitPlanMode`
|
||||
denial reasons
|
||||
- keep the same narrative-first editorial style
|
||||
- keep the same 7 major sections
|
||||
- use real denial-reason counts, dates, phrases, and percentages only
|
||||
|
||||
### KPI Sidebar Substitutes
|
||||
|
||||
The Plannotator version uses a revision-rate KPI that may not exist here.
|
||||
|
||||
In fallback mode, prefer this KPI trio:
|
||||
|
||||
1. top denial category percentage
|
||||
2. total human denial reasons recovered
|
||||
3. number of distinct denial categories
|
||||
|
||||
If a better third metric emerges from the data, use it, but do not invent one.
|
||||
|
||||
### Footer / Provenance
|
||||
|
||||
The footer tagline should mention that the report was derived from Claude Code
|
||||
denial reasons rather than Plannotator markdown archives.
|
||||
|
||||
### Important Limitation To State
|
||||
|
||||
If `human_reasons_total < total denials`, mention in the narrative or footer note
|
||||
that some denials in the transcript did not contain recoverable human-authored
|
||||
feedback and therefore could not contribute to the pattern analysis.
|
||||
|
||||
### Versioned Report Naming
|
||||
|
||||
Versioned naming (`v2`, `v3`, etc.) applies to fallback mode too. Save reports
|
||||
to `~/.plannotator/plans/` (create the directory if it doesn't exist) so that
|
||||
all compound planning reports live in the same location regardless of data source.
|
||||
|
||||
## Summary In Fallback Mode
|
||||
|
||||
At the end, tell the user:
|
||||
|
||||
- how many human denial reasons were analyzed
|
||||
- how many total `ExitPlanMode` attempts were scanned
|
||||
- the top 3 denial patterns found
|
||||
- the estimated percentage of denial reasons the corrective instructions address
|
||||
- the single most impactful prompt improvement
|
||||
- where the report was saved (including version number)
|
||||
- if incremental: note that earlier findings are in the previous report
|
||||
|
||||
## Improvement Hook In Fallback Mode
|
||||
|
||||
The Phase 6 improvement hook applies to fallback mode too. The corrective prompt
|
||||
instructions derived from Claude Code denial reasons are just as useful for
|
||||
injection into future planning sessions. Follow the same flow as the main skill.
|
||||
|
||||
## Audit Mode
|
||||
|
||||
Only if the user asks for raw denial records or transcript noise:
|
||||
|
||||
```bash
python3 [RESOLVED SKILL PATH]/scripts/extract_exit_plan_mode_outcomes.py \
  --projects-dir ~/.claude/projects \
  --records-filter denials \
  --json-out /tmp/compound-planning/claude-code-all-denials.json \
  --show-samples 0
```
|
||||
|
||||
Do not use this audit-mode output for the normal report unless the user asks for it.
|
||||
@@ -0,0 +1,820 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Extract ExitPlanMode outcomes from Claude Code JSONL session logs.
|
||||
|
||||
This parser keeps three views of the same data:
|
||||
|
||||
1. Strict native Claude Code classification
|
||||
- native approval:
|
||||
"User has approved your plan."
|
||||
- native denial:
|
||||
"The user doesn't want to proceed with this tool use. The tool use was rejected"
|
||||
|
||||
2. General denial capture
|
||||
- any matching ExitPlanMode tool_result with is_error=true and non-empty text
|
||||
is captured as a denial/error payload, even when it is custom hook output
|
||||
or some other non-native integration.
|
||||
|
||||
3. Human-reason extraction
|
||||
- native inline reasons are preserved as-is
|
||||
- freeform non-native error payloads are treated as human reasons
|
||||
- structured non-native payloads are reduced to quoted feedback where possible
|
||||
|
||||
This means the script does not depend on hook-specific strings to capture custom
|
||||
denials, but it also does not dump wrapper boilerplate into the human-reason
|
||||
output.
|
||||
|
||||
The script streams JSONL line-by-line and uses only the Python standard library.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
from dataclasses import asdict, dataclass
|
||||
from pathlib import Path
|
||||
from typing import Dict, Iterable, Iterator, List, Optional, Tuple
|
||||
|
||||
|
||||
APPROVE_PREFIX = "User has approved your plan."
|
||||
REJECT_PREFIX = (
|
||||
"The user doesn't want to proceed with this tool use. "
|
||||
"The tool use was rejected"
|
||||
)
|
||||
REASON_MARKER = "To tell you how to proceed, the user said:\n"
|
||||
NOTE_MARKER = (
|
||||
"\n\nNote: The user's next message may contain a correction or preference."
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class AttemptRecord:
|
||||
session_id: str
|
||||
tool_use_id: str
|
||||
file_path: str
|
||||
line_number: int
|
||||
timestamp: Optional[str]
|
||||
cwd: Optional[str]
|
||||
plan_file_path: Optional[str]
|
||||
plan_length_chars: Optional[int]
|
||||
outcome: str = "pending"
|
||||
native_reason: Optional[str] = None
|
||||
native_reason_style: Optional[str] = None
|
||||
captured_reason: Optional[str] = None
|
||||
captured_reason_style: Optional[str] = None
|
||||
captured_reason_source: Optional[str] = None
|
||||
human_reason: Optional[str] = None
|
||||
human_reason_style: Optional[str] = None
|
||||
human_reason_source: Optional[str] = None
|
||||
result_is_error: Optional[bool] = None
|
||||
result_file_path: Optional[str] = None
|
||||
result_line_number: Optional[int] = None
|
||||
result_timestamp: Optional[str] = None
|
||||
result_preview: Optional[str] = None
|
||||
|
||||
|
||||
def parse_args() -> argparse.Namespace:
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Extract ExitPlanMode approvals/denials from Claude Code logs."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--projects-dir",
|
||||
default="~/.claude/projects",
|
||||
help="Root Claude projects directory. Default: %(default)s",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--include-subagents",
|
||||
action="store_true",
|
||||
help="Include /subagents/ JSONL files. Default is to skip them.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--records-filter",
|
||||
choices=("all", "native", "native-denials", "denials", "human-reasons"),
|
||||
default="human-reasons",
|
||||
help=(
|
||||
"Which records to write to JSON/CSV outputs. "
|
||||
"Default: %(default)s"
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--include-non-native-denials",
|
||||
action="store_true",
|
||||
help=(
|
||||
"Include non-native denial/error payloads in sample output. "
|
||||
"Default sample output shows only native denials."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--show-samples",
|
||||
type=int,
|
||||
default=5,
|
||||
help="How many denial samples to print in the text summary.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--json-out",
|
||||
help="Optional path to write a JSON report.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max-output-tokens-per-file",
|
||||
type=int,
|
||||
default=50000,
|
||||
help=(
|
||||
"Approximate max token budget per JSON file when writing --json-out. "
|
||||
"Default: %(default)s"
|
||||
),
|
||||
)
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def iter_jsonl_files(root: Path, include_subagents: bool) -> Iterator[Path]:
|
||||
for dirpath, dirnames, filenames in os.walk(root):
|
||||
if not include_subagents and "subagents" in dirnames:
|
||||
dirnames.remove("subagents")
|
||||
dirnames.sort()
|
||||
for filename in sorted(filenames):
|
||||
if filename.endswith(".jsonl"):
|
||||
yield Path(dirpath) / filename
|
||||
|
||||
|
||||
def make_attempt_key(session_id: str, tool_use_id: str) -> str:
|
||||
return session_id + "::" + tool_use_id
|
||||
|
||||
|
||||
def preview(text: str, limit: int = 220) -> str:
|
||||
compact = " ".join(text.split())
|
||||
if len(compact) <= limit:
|
||||
return compact
|
||||
return compact[: limit - 3] + "..."
|
||||
|
||||
|
||||
def estimate_tokens(text: str) -> int:
|
||||
# Rough enough for output chunking. We intentionally bias slightly high.
|
||||
return max(1, (len(text) + 3) // 4)
|
||||
|
||||
|
||||
def iter_blocks(message_content: object) -> Iterator[dict]:
|
||||
if not isinstance(message_content, list):
|
||||
return
|
||||
for block in message_content:
|
||||
if isinstance(block, dict):
|
||||
yield block
|
||||
|
||||
|
||||
def extract_text(content: object) -> str:
|
||||
if isinstance(content, str):
|
||||
return content
|
||||
if not isinstance(content, list):
|
||||
return ""
|
||||
|
||||
parts: List[str] = []
|
||||
for item in content:
|
||||
if isinstance(item, str):
|
||||
parts.append(item)
|
||||
continue
|
||||
if not isinstance(item, dict):
|
||||
continue
|
||||
if isinstance(item.get("text"), str):
|
||||
parts.append(item["text"])
|
||||
elif isinstance(item.get("content"), str):
|
||||
parts.append(item["content"])
|
||||
return "\n".join(part for part in parts if part)
|
||||
|
||||
|
||||
def classify_reason_style(reason: Optional[str]) -> Optional[str]:
|
||||
if not reason:
|
||||
return None
|
||||
|
||||
stripped = reason.lstrip()
|
||||
if (
|
||||
stripped.startswith("#")
|
||||
or stripped.startswith("YOUR PLAN WAS NOT APPROVED.")
|
||||
or "\n## " in reason
|
||||
or "\n---" in reason
|
||||
):
|
||||
return "structured"
|
||||
return "freeform"
|
||||
|
||||
|
||||
def extract_blockquote_feedback(text: str) -> List[str]:
|
||||
quotes: List[str] = []
|
||||
current: List[str] = []
|
||||
|
||||
for raw_line in text.splitlines():
|
||||
stripped = raw_line.strip()
|
||||
if stripped.startswith(">"):
|
||||
current.append(stripped[1:].lstrip())
|
||||
continue
|
||||
|
||||
if current:
|
||||
if not stripped or stripped.startswith("## ") or stripped == "---":
|
||||
quote = "\n".join(line for line in current if line).strip()
|
||||
if quote:
|
||||
quotes.append(quote)
|
||||
current = []
|
||||
continue
|
||||
|
||||
# Preserve wrapped continuation lines that belong to the same quote.
|
||||
current.append(stripped)
|
||||
|
||||
if current:
|
||||
quote = "\n".join(line for line in current if line).strip()
|
||||
if quote:
|
||||
quotes.append(quote)
|
||||
|
||||
return quotes
|
||||
|
||||
|
||||
def extract_human_reason(
|
||||
native_reason: Optional[str],
|
||||
captured_reason: Optional[str],
|
||||
captured_reason_style: Optional[str],
|
||||
) -> Tuple[Optional[str], Optional[str], Optional[str]]:
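    """Pick the most human-authored denial reason available.

    Prefers the native inline reason, then a freeform captured payload, then
    blockquote feedback pulled out of a structured payload. Returns
    (reason, style, source), or (None, None, None) if nothing usable exists.
    """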
|
||||
if native_reason:
|
||||
return (
|
||||
native_reason,
|
||||
classify_reason_style(native_reason),
|
||||
"native_inline_reason",
|
||||
)
|
||||
|
||||
if not captured_reason:
|
||||
return (None, None, None)
|
||||
|
||||
if captured_reason_style == "freeform":
|
||||
return (
|
||||
captured_reason,
|
||||
classify_reason_style(captured_reason),
|
||||
"non_native_freeform_payload",
|
||||
)
|
||||
|
||||
quote_feedback = extract_blockquote_feedback(captured_reason)
|
||||
if quote_feedback:
|
||||
reason = "\n\n".join(quote_feedback)
|
||||
return (
|
||||
reason,
|
||||
classify_reason_style(reason),
|
||||
"structured_quote_extraction",
|
||||
)
|
||||
|
||||
return (None, None, None)
|
||||
|
||||
|
||||
def classify_result(
|
||||
text: str,
|
||||
is_error: bool,
|
||||
) -> Tuple[str, Optional[str], Optional[str], Optional[str], Optional[str]]:
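    """Classify one ExitPlanMode tool_result payload.

    Returns (outcome, native_reason, captured_reason, captured_reason_source,
    captured_reason_style); every element except the outcome may be None.
    """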
|
||||
stripped = text.strip()
|
||||
if not stripped:
|
||||
if is_error:
|
||||
return (
|
||||
"denied_non_native_no_payload",
|
||||
None,
|
||||
None,
|
||||
None,
|
||||
None,
|
||||
)
|
||||
return ("pending", None, None, None, None)
|
||||
|
||||
if stripped.startswith(APPROVE_PREFIX):
|
||||
return ("approved_native", None, None, None, None)
|
||||
|
||||
if stripped.startswith(REJECT_PREFIX):
|
||||
marker_index = stripped.find(REASON_MARKER)
|
||||
if marker_index < 0:
|
||||
return ("denied_native_no_reason", None, None, None, None)
|
||||
|
||||
reason = stripped[marker_index + len(REASON_MARKER) :]
|
||||
note_index = reason.find(NOTE_MARKER)
|
||||
if note_index >= 0:
|
||||
reason = reason[:note_index]
|
||||
reason = reason.strip()
|
||||
if reason:
|
||||
style = classify_reason_style(reason)
|
||||
return (
|
||||
"denied_native_with_reason",
|
||||
reason,
|
||||
reason,
|
||||
"native_inline_reason",
|
||||
style,
|
||||
)
|
||||
return ("denied_native_no_reason", None, None, None, None)
|
||||
|
||||
if is_error:
|
||||
style = classify_reason_style(stripped)
|
||||
return (
|
||||
"denied_non_native_with_payload",
|
||||
None,
|
||||
stripped,
|
||||
"non_native_error_payload",
|
||||
style,
|
||||
)
|
||||
|
||||
return ("non_native_other", None, None, None, None)
|
||||
|
||||
|
||||
def outcome_rank(outcome: str) -> int:
|
||||
ranks = {
|
||||
"pending": 0,
|
||||
"non_native_other": 1,
|
||||
"approved_native": 2,
|
||||
"denied_native_no_reason": 3,
|
||||
"denied_native_with_reason": 4,
|
||||
"denied_non_native_no_payload": 5,
|
||||
"denied_non_native_with_payload": 6,
|
||||
}
|
||||
return ranks.get(outcome, 0)
|
||||
|
||||
|
||||
def update_attempt_from_result(
|
||||
attempt: AttemptRecord,
|
||||
file_path: Path,
|
||||
line_number: int,
|
||||
timestamp: Optional[str],
|
||||
text: str,
|
||||
is_error: bool,
|
||||
) -> None:
|
||||
(
|
||||
outcome,
|
||||
native_reason,
|
||||
captured_reason,
|
||||
captured_reason_source,
|
||||
captured_reason_style,
|
||||
) = classify_result(text=text, is_error=is_error)
|
||||
if outcome_rank(outcome) < outcome_rank(attempt.outcome):
|
||||
return
|
||||
|
||||
attempt.outcome = outcome
|
||||
attempt.native_reason = native_reason
|
||||
attempt.native_reason_style = classify_reason_style(native_reason)
|
||||
attempt.captured_reason = captured_reason
|
||||
attempt.captured_reason_source = captured_reason_source
|
||||
attempt.captured_reason_style = captured_reason_style
|
||||
(
|
||||
attempt.human_reason,
|
||||
attempt.human_reason_style,
|
||||
attempt.human_reason_source,
|
||||
) = extract_human_reason(
|
||||
native_reason=native_reason,
|
||||
captured_reason=captured_reason,
|
||||
captured_reason_style=captured_reason_style,
|
||||
)
|
||||
attempt.result_is_error = is_error
|
||||
attempt.result_file_path = str(file_path)
|
||||
attempt.result_line_number = line_number
|
||||
attempt.result_timestamp = timestamp
|
||||
attempt.result_preview = preview(text)
|
||||
|
||||
|
||||
def scan_projects(
|
||||
projects_dir: Path,
|
||||
include_subagents: bool,
|
||||
) -> Tuple[Dict[str, int], List[AttemptRecord]]:
|
||||
stats = {
|
||||
"files_scanned": 0,
|
||||
"lines_scanned": 0,
|
||||
"json_errors": 0,
|
||||
}
|
||||
attempts: Dict[str, AttemptRecord] = {}
|
||||
|
||||
for file_path in iter_jsonl_files(projects_dir, include_subagents):
|
||||
stats["files_scanned"] += 1
|
||||
try:
|
||||
handle = file_path.open("r", encoding="utf-8", errors="replace")
|
||||
except OSError:
|
||||
continue
|
||||
|
||||
with handle:
|
||||
for line_number, raw_line in enumerate(handle, start=1):
|
||||
if not raw_line.strip():
|
||||
continue
|
||||
stats["lines_scanned"] += 1
|
||||
try:
|
||||
obj = json.loads(raw_line)
|
||||
except json.JSONDecodeError:
|
||||
stats["json_errors"] += 1
|
||||
continue
|
||||
|
||||
session_id = str(obj.get("sessionId") or str(file_path))
|
||||
timestamp = obj.get("timestamp")
|
||||
cwd = obj.get("cwd")
|
||||
message = obj.get("message")
|
||||
if not isinstance(message, dict):
|
||||
continue
|
||||
|
||||
content = message.get("content")
|
||||
|
||||
for block in iter_blocks(content):
|
||||
if (
|
||||
block.get("type") == "tool_use"
|
||||
and block.get("name") == "ExitPlanMode"
|
||||
and isinstance(block.get("id"), str)
|
||||
):
|
||||
tool_use_id = block["id"]
|
||||
key = make_attempt_key(session_id, tool_use_id)
|
||||
if key in attempts:
|
||||
continue
|
||||
input_data = block.get("input")
|
||||
plan = None
|
||||
plan_file_path = None
|
||||
if isinstance(input_data, dict):
|
||||
if isinstance(input_data.get("plan"), str):
|
||||
plan = input_data["plan"]
|
||||
if isinstance(input_data.get("planFilePath"), str):
|
||||
plan_file_path = input_data["planFilePath"]
|
||||
|
||||
attempts[key] = AttemptRecord(
|
||||
session_id=session_id,
|
||||
tool_use_id=tool_use_id,
|
||||
file_path=str(file_path),
|
||||
line_number=line_number,
|
||||
timestamp=timestamp if isinstance(timestamp, str) else None,
|
||||
cwd=cwd if isinstance(cwd, str) else None,
|
||||
plan_file_path=plan_file_path,
|
||||
plan_length_chars=len(plan) if isinstance(plan, str) else None,
|
||||
)
|
||||
|
||||
if message.get("role") != "user":
|
||||
continue
|
||||
|
||||
for block in iter_blocks(content):
|
||||
if (
|
||||
block.get("type") != "tool_result"
|
||||
or not isinstance(block.get("tool_use_id"), str)
|
||||
):
|
||||
continue
|
||||
|
||||
key = make_attempt_key(session_id, block["tool_use_id"])
|
||||
attempt = attempts.get(key)
|
||||
if attempt is None:
|
||||
continue
|
||||
|
||||
text = extract_text(block.get("content"))
|
||||
update_attempt_from_result(
|
||||
attempt=attempt,
|
||||
file_path=file_path,
|
||||
line_number=line_number,
|
||||
timestamp=timestamp if isinstance(timestamp, str) else None,
|
||||
text=text,
|
||||
is_error=bool(block.get("is_error")),
|
||||
)
|
||||
|
||||
return stats, list(attempts.values())
|
||||
|
||||
|
||||
def summarize(attempts: Iterable[AttemptRecord]) -> Dict[str, int]:
|
||||
summary = {
|
||||
"total_exit_plan_attempts": 0,
|
||||
"approved_native": 0,
|
||||
"denied_native_with_reason": 0,
|
||||
"denied_native_no_reason": 0,
|
||||
"denied_native_with_freeform_reason": 0,
|
||||
"denied_native_with_structured_reason": 0,
|
||||
"denied_non_native_with_payload": 0,
|
||||
"denied_non_native_no_payload": 0,
|
||||
"captured_denial_reasons_total": 0,
|
||||
"captured_freeform_reasons": 0,
|
||||
"captured_structured_reasons": 0,
|
||||
"human_reasons_total": 0,
|
||||
"human_reasons_native": 0,
|
||||
"human_reasons_non_native": 0,
|
||||
"human_reasons_freeform": 0,
|
||||
"human_reasons_structured": 0,
|
||||
"non_native_other": 0,
|
||||
"pending": 0,
|
||||
}
|
||||
for attempt in attempts:
|
||||
summary["total_exit_plan_attempts"] += 1
|
||||
summary[attempt.outcome] = summary.get(attempt.outcome, 0) + 1
|
||||
if attempt.outcome == "denied_native_with_reason":
|
||||
if attempt.native_reason_style == "freeform":
|
||||
summary["denied_native_with_freeform_reason"] += 1
|
||||
elif attempt.native_reason_style == "structured":
|
||||
summary["denied_native_with_structured_reason"] += 1
|
||||
if attempt.captured_reason:
|
||||
summary["captured_denial_reasons_total"] += 1
|
||||
if attempt.captured_reason_style == "freeform":
|
||||
summary["captured_freeform_reasons"] += 1
|
||||
elif attempt.captured_reason_style == "structured":
|
||||
summary["captured_structured_reasons"] += 1
|
||||
if attempt.human_reason:
|
||||
summary["human_reasons_total"] += 1
|
||||
if attempt.human_reason_source == "native_inline_reason":
|
||||
summary["human_reasons_native"] += 1
|
||||
else:
|
||||
summary["human_reasons_non_native"] += 1
|
||||
if attempt.human_reason_style == "freeform":
|
||||
summary["human_reasons_freeform"] += 1
|
||||
elif attempt.human_reason_style == "structured":
|
||||
summary["human_reasons_structured"] += 1
|
||||
return summary
|
||||
|
||||
|
||||
def filter_records(
|
||||
attempts: List[AttemptRecord],
|
||||
records_filter: str,
|
||||
) -> List[AttemptRecord]:
|
||||
if records_filter == "all":
|
||||
return attempts
|
||||
if records_filter == "native":
|
||||
return [
|
||||
attempt
|
||||
for attempt in attempts
|
||||
if attempt.outcome.startswith("approved_native")
|
||||
or attempt.outcome.startswith("denied_native")
|
||||
]
|
||||
if records_filter == "native-denials":
|
||||
return [
|
||||
attempt
|
||||
for attempt in attempts
|
||||
if attempt.outcome.startswith("denied_native")
|
||||
]
|
||||
if records_filter == "human-reasons":
|
||||
return [attempt for attempt in attempts if attempt.human_reason]
|
||||
return [
|
||||
attempt
|
||||
for attempt in attempts
|
||||
if attempt.outcome.startswith("denied_native")
|
||||
or attempt.outcome.startswith("denied_non_native")
|
||||
]
|
||||
|
||||
|
||||
def build_json_chunks(
|
||||
records: List[AttemptRecord],
|
||||
max_output_tokens_per_file: int,
|
||||
) -> List[List[AttemptRecord]]:
|
||||
if not records:
|
||||
return [[]]
|
||||
|
||||
chunks: List[List[AttemptRecord]] = []
|
||||
current_chunk: List[AttemptRecord] = []
|
||||
current_tokens = 0
|
||||
|
||||
for record in records:
|
||||
record_dict = asdict(record)
|
||||
record_json = json.dumps(record_dict, ensure_ascii=False)
|
||||
record_tokens = estimate_tokens(record_json)
|
||||
|
||||
if current_chunk and current_tokens + record_tokens > max_output_tokens_per_file:
|
||||
chunks.append(current_chunk)
|
||||
current_chunk = []
|
||||
current_tokens = 0
|
||||
|
||||
current_chunk.append(record)
|
||||
current_tokens += record_tokens
|
||||
|
||||
if current_chunk:
|
||||
chunks.append(current_chunk)
|
||||
|
||||
return chunks
|
||||
|
||||
|
||||
def print_summary(
|
||||
projects_dir: Path,
|
||||
include_subagents: bool,
|
||||
stats: Dict[str, int],
|
||||
attempts: List[AttemptRecord],
|
||||
summary: Dict[str, int],
|
||||
show_samples: int,
|
||||
include_non_native_denials: bool,
|
||||
) -> None:
|
||||
native_denials = (
|
||||
summary["denied_native_with_reason"] + summary["denied_native_no_reason"]
|
||||
)
|
||||
total_denials = (
|
||||
native_denials
|
||||
+ summary["denied_non_native_with_payload"]
|
||||
+ summary["denied_non_native_no_payload"]
|
||||
)
|
||||
native_extractable_ratio = (
|
||||
(summary["denied_native_with_reason"] / native_denials) * 100.0
|
||||
if native_denials
|
||||
else 0.0
|
||||
)
|
||||
all_capture_ratio = (
|
||||
(summary["captured_denial_reasons_total"] / total_denials) * 100.0
|
||||
if total_denials
|
||||
else 0.0
|
||||
)
|
||||
|
||||
print(f"Projects dir: {projects_dir}")
|
||||
print(f"Included subagents: {'yes' if include_subagents else 'no'}")
|
||||
print(f"JSONL files scanned: {stats['files_scanned']}")
|
||||
print(f"JSON lines scanned: {stats['lines_scanned']}")
|
||||
print(f"JSON parse errors: {stats['json_errors']}")
|
||||
print()
|
||||
print(f"ExitPlanMode attempts: {summary['total_exit_plan_attempts']}")
|
||||
print(f"Native approvals: {summary['approved_native']}")
|
||||
print(
|
||||
"Native denials with extractable reason: "
|
||||
f"{summary['denied_native_with_reason']}"
|
||||
)
|
||||
print(
|
||||
"Native denials without reason: "
|
||||
f"{summary['denied_native_no_reason']}"
|
||||
)
|
||||
print(
|
||||
"Freeform native reasons: "
|
||||
f"{summary['denied_native_with_freeform_reason']}"
|
||||
)
|
||||
print(
|
||||
"Structured native reasons: "
|
||||
f"{summary['denied_native_with_structured_reason']}"
|
||||
)
|
||||
print(
|
||||
"Non-native denials with payload: "
|
||||
f"{summary['denied_non_native_with_payload']}"
|
||||
)
|
||||
print(
|
||||
"Non-native denials without payload: "
|
||||
f"{summary['denied_non_native_no_payload']}"
|
||||
)
|
||||
print(
|
||||
"Captured denial reasons total: "
|
||||
f"{summary['captured_denial_reasons_total']}"
|
||||
)
|
||||
print(
|
||||
"Captured freeform reasons: "
|
||||
f"{summary['captured_freeform_reasons']}"
|
||||
)
|
||||
print(
|
||||
"Captured structured reasons: "
|
||||
f"{summary['captured_structured_reasons']}"
|
||||
)
|
||||
print(f"Human reasons total: {summary['human_reasons_total']}")
|
||||
print(f"Human reasons from native denials: {summary['human_reasons_native']}")
|
||||
print(
|
||||
"Human reasons from non-native denials: "
|
||||
f"{summary['human_reasons_non_native']}"
|
||||
)
|
||||
print(
|
||||
"Non-native / non-denial outcomes: "
|
||||
f"{summary['non_native_other']}"
|
||||
)
|
||||
print(f"Pending / unmatched attempts: {summary['pending']}")
|
||||
print()
|
||||
print(
|
||||
"Extractable native denial reasons: "
|
||||
f"{summary['denied_native_with_reason']}/{native_denials} "
|
||||
f"({native_extractable_ratio:.1f}%)"
|
||||
)
|
||||
print(
|
||||
"Captured denial payloads across all denial types: "
|
||||
f"{summary['captured_denial_reasons_total']}/{total_denials} "
|
||||
f"({all_capture_ratio:.1f}%)"
|
||||
)
|
||||
print(
|
||||
"Human reasons across all denial types: "
|
||||
f"{summary['human_reasons_total']}/{total_denials} "
|
||||
f"({((summary['human_reasons_total'] / total_denials) * 100.0 if total_denials else 0.0):.1f}%)"
|
||||
)
|
||||
|
||||
if include_non_native_denials:
|
||||
samples = [attempt for attempt in attempts if attempt.human_reason]
|
||||
else:
|
||||
samples = [
|
||||
attempt
|
||||
for attempt in attempts
|
||||
if attempt.outcome == "denied_native_with_reason" and attempt.human_reason
|
||||
]
|
||||
samples = samples[: max(show_samples, 0)]
|
||||
if not samples:
|
||||
return
|
||||
|
||||
print()
|
||||
print(
|
||||
"Sample denial reasons:"
|
||||
if include_non_native_denials
|
||||
else "Sample native denial reasons:"
|
||||
)
|
||||
for attempt in samples:
|
||||
style = attempt.human_reason_style or "unknown"
|
||||
source = attempt.human_reason_source or "unknown"
|
||||
reason = attempt.human_reason or ""
|
||||
print(
|
||||
"- "
|
||||
f"[{attempt.outcome} / {source} / {style}] "
|
||||
f"{reason!r} "
|
||||
f"({attempt.file_path}:{attempt.result_line_number})"
|
||||
)
|
||||
|
||||
|
||||
def write_json_report(
|
||||
output_path: Path,
|
||||
projects_dir: Path,
|
||||
include_subagents: bool,
|
||||
stats: Dict[str, int],
|
||||
summary: Dict[str, int],
|
||||
records: List[AttemptRecord],
|
||||
max_output_tokens_per_file: int,
|
||||
) -> List[Path]:
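    """Write chunked part files plus a manifest for the filtered records.

    Files land in a directory derived from output_path with its suffix
    stripped; the returned list starts with the manifest path, followed by
    the part paths in order.
    """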
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
chunks = build_json_chunks(records, max_output_tokens_per_file)
|
||||
base_name = output_path.stem
|
||||
output_dir = output_path.with_suffix("")
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
written_files: List[Path] = []
|
||||
part_summaries = []
|
||||
|
||||
for index, chunk in enumerate(chunks, start=1):
|
||||
chunk_records = [asdict(record) for record in chunk]
|
||||
chunk_payload = {
|
||||
"projects_dir": str(projects_dir),
|
||||
"include_subagents": include_subagents,
|
||||
"stats": stats,
|
||||
"summary": summary,
|
||||
"part_index": index,
|
||||
"part_count": len(chunks),
|
||||
"record_count": len(chunk_records),
|
||||
"records": chunk_records,
|
||||
}
|
||||
part_name = f"{base_name}.part-{index:04d}-of-{len(chunks):04d}.json"
|
||||
part_path = output_dir / part_name
|
||||
part_path.write_text(
|
||||
json.dumps(chunk_payload, indent=2, ensure_ascii=False),
|
||||
encoding="utf-8",
|
||||
)
|
||||
written_files.append(part_path)
|
||||
part_summaries.append(
|
||||
{
|
||||
"part_index": index,
|
||||
"file_name": part_name,
|
||||
"record_count": len(chunk_records),
|
||||
}
|
||||
)
|
||||
|
||||
manifest_payload = {
|
||||
"projects_dir": str(projects_dir),
|
||||
"include_subagents": include_subagents,
|
||||
"stats": stats,
|
||||
"summary": summary,
|
||||
"records_filter_record_count": len(records),
|
||||
"part_count": len(chunks),
|
||||
"max_output_tokens_per_file": max_output_tokens_per_file,
|
||||
"parts": part_summaries,
|
||||
}
|
||||
manifest_path = output_dir / f"{base_name}.manifest.json"
|
||||
manifest_path.write_text(
|
||||
json.dumps(manifest_payload, indent=2, ensure_ascii=False),
|
||||
encoding="utf-8",
|
||||
)
|
||||
written_files.insert(0, manifest_path)
|
||||
|
||||
return written_files
|
||||
|
||||
|
||||
def main() -> int:
|
||||
args = parse_args()
|
||||
projects_dir = Path(args.projects_dir).expanduser()
|
||||
if not projects_dir.exists():
|
||||
print(f"Projects dir does not exist: {projects_dir}", file=sys.stderr)
|
||||
return 1
|
||||
|
||||
stats, attempts = scan_projects(
|
||||
projects_dir=projects_dir,
|
||||
include_subagents=args.include_subagents,
|
||||
)
|
||||
attempts.sort(
|
||||
key=lambda attempt: (
|
||||
attempt.file_path,
|
||||
attempt.line_number,
|
||||
attempt.tool_use_id,
|
||||
)
|
||||
)
|
||||
summary = summarize(attempts)
|
||||
records = filter_records(attempts, args.records_filter)
|
||||
|
||||
print_summary(
|
||||
projects_dir=projects_dir,
|
||||
include_subagents=args.include_subagents,
|
||||
stats=stats,
|
||||
attempts=attempts,
|
||||
summary=summary,
|
||||
show_samples=args.show_samples,
|
||||
include_non_native_denials=args.include_non_native_denials,
|
||||
)
|
||||
|
||||
if args.json_out:
|
||||
written_files = write_json_report(
|
||||
output_path=Path(args.json_out).expanduser(),
|
||||
projects_dir=projects_dir,
|
||||
include_subagents=args.include_subagents,
|
||||
stats=stats,
|
||||
summary=summary,
|
||||
records=records,
|
||||
max_output_tokens_per_file=args.max_output_tokens_per_file,
|
||||
)
|
||||
part_count = max(len(written_files) - 1, 0)
|
||||
print()
|
||||
print(
|
||||
"Wrote JSON output: "
|
||||
f"detected {len(records)} records for filter '{args.records_filter}' "
|
||||
f"and emitted {part_count} part file(s) plus a manifest."
|
||||
)
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
23
extensions/plannotator/skills/plannotator-last/SKILL.md
Normal file
23
extensions/plannotator/skills/plannotator-last/SKILL.md
Normal file
@@ -0,0 +1,23 @@
|
||||
---
|
||||
name: plannotator-last
|
||||
description: Open Plannotator on the latest rendered assistant message and use the returned annotations to revise that message or continue.
|
||||
---
|
||||
|
||||
# Plannotator Last
|
||||
|
||||
Use this skill when the user wants to annotate the latest assistant response in Plannotator.
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
plannotator last
|
||||
```
|
||||
|
||||
Behavior:
|
||||
|
||||
1. Launch the command with Bash.
|
||||
2. Wait for the annotation session to finish.
|
||||
3. If feedback is returned, incorporate it into the follow-up response.
|
||||
4. If the session closes without feedback, mention that briefly and continue.
|
||||
|
||||
Run the command yourself rather than telling the user to invoke shell syntax manually.
|
||||
@@ -0,0 +1,5 @@
|
||||
interface:
|
||||
display_name: "Plannotator Last"
|
||||
short_description: "Annotate the latest assistant message in Plannotator."
|
||||
policy:
|
||||
allow_implicit_invocation: false
|
||||
23
extensions/plannotator/skills/plannotator-review/SKILL.md
Normal file
23
extensions/plannotator/skills/plannotator-review/SKILL.md
Normal file
@@ -0,0 +1,23 @@
|
||||
---
|
||||
name: plannotator-review
|
||||
description: Open Plannotator's browser-based code review UI for the current worktree or a pull request URL, then act on the feedback that comes back.
|
||||
---
|
||||
|
||||
# Plannotator Review
|
||||
|
||||
Use this skill when the user wants to review current code changes in Plannotator instead of reading a diff inline.
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
plannotator review [optional-pr-url]
|
||||
```
|
||||
|
||||
Behavior:
|
||||
|
||||
1. Launch the command with Bash.
|
||||
2. Wait for it to finish.
|
||||
3. If it returns feedback or annotations, address them in the same conversation.
|
||||
4. If it returns an approval/LGTM-style message, acknowledge that review passed and continue.
|
||||
|
||||
Do not ask the user to copy shell commands into chat. Run the command yourself.
|
||||
@@ -0,0 +1,5 @@
|
||||
interface:
|
||||
display_name: "Plannotator Review"
|
||||
short_description: "Open Plannotator code review for local changes or a PR."
|
||||
policy:
|
||||
allow_implicit_invocation: false
|
||||
@@ -0,0 +1,89 @@
|
||||
---
|
||||
name: plannotator-setup-goal
|
||||
description: Create reviewed Codex goal setup packages for long-running /goal work. Use when the user wants to turn an idea, backlog, project mission, or vague objective into durable goal files under a project goals slug folder, with Plannotator review gates for the brief, the narrative plan with acceptance criteria, the verification steps, the blockers, and the final /goal prompt.
|
||||
---
|
||||
|
||||
# Plannotator Setup Goal
|
||||
|
||||
## Overview
|
||||
|
||||
Create a durable goal package in the current project at `goals/<slug>/` so Codex `/goal` has a clear mission, guardrails, proof of done, and external memory. Use Plannotator as the user review UI: every critical document must be gated with `plannotator annotate <document.md> --gate` and revised until approved.
|
||||
|
||||
## Workflow
|
||||
|
||||
1. Confirm the working directory is the project root, or use the user-provided project directory.
|
||||
2. Gather enough context to name the goal, define the intended outcome, identify constraints, find likely project docs, and determine proof of done.
|
||||
3. Ask focused questions whenever the goal is vague, risky, too broad, missing a finish line, or missing verification. Do not proceed with guessed critical requirements.
|
||||
4. Create a slug from the goal name and scaffold `goals/<slug>/` with:
|
||||
|
||||
```bash
|
||||
python3 <skill_dir>/scripts/scaffold_goal.py --root . --slug <slug> --title "<goal title>" --objective "<one sentence outcome>"
|
||||
```
|
||||
|
||||
5. Draft and refine the critical documents in this order:
|
||||
- `brief.md`
|
||||
- `plan.md`
|
||||
- `verification.md`
|
||||
- `blockers.md`
|
||||
- `goal-prompt.md`
|
||||
6. Gate each critical document with Plannotator before moving on:
|
||||
|
||||
```bash
|
||||
plannotator annotate goals/<slug>/<document.md> --gate
|
||||
```
|
||||
|
||||
7. If Plannotator returns a denial, comments, or markup, treat that as user feedback. Revise the document, then run the same gate again. Continue until approved.
|
||||
8. After all gates pass, present the final path and the exact `/goal` prompt from `goal-prompt.md`.
|
||||
|
||||
## Document Standards
|
||||
|
||||
`brief.md` must state the mission, context, constraints, non-goals, ask-before rules, and a concise done condition.
|
||||
|
||||
`plan.md` is the central reviewed planning artifact. It must read like a clear solution narrative, not just a technical checklist. Include what is being built, why this approach is appropriate, how the solution will work, the main implementation slices, risks, phase boundaries, and acceptance criteria. Every important acceptance item needs observable evidence. For large missions, prefer several sequential goals over one endless goal.
|
||||
|
||||
`verification.md` must list exact verification commands and manual checks. Include expected pass conditions and where evidence should be recorded.
|
||||
|
||||
`blockers.md` must capture open questions, user-decision points, dangerous operations that require approval, and conditions that should pause the goal.
|
||||
|
||||
`goal-prompt.md` must contain the final command the user can paste into Codex. It should reference the goal package files as the durable source of truth, tell Codex to append evidence to `progress.jsonl`, and define when to stop or ask.
|
||||
|
||||
`progress.jsonl` is append-only evidence. Do not gate it. During execution, append concrete progress and proof, not summaries of intent.
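
For illustration, a minimal sketch of what appending one evidence entry could look like. The field names beyond `timestamp` and `evidence` are illustrative rather than a fixed schema; the scaffold only seeds an initial `goal_package_created` entry.

```python
import datetime as dt
import json
from pathlib import Path

# Hypothetical entry: concrete proof for one verification command, not a summary of intent.
entry = {
    "type": "verification",
    "timestamp": dt.datetime.now(dt.timezone.utc).isoformat(),
    "command": "bun test",  # illustrative command
    "status": "pass",
    "evidence": "tool-scope.test.ts: all assertions green",
}
with Path("goals/<slug>/progress.jsonl").open("a", encoding="utf-8") as handle:
    handle.write(json.dumps(entry, ensure_ascii=True) + "\n")
```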
|
||||
|
||||
## Plannotator Rules
|
||||
|
||||
Use Plannotator as the review surface, not as a passive preview. The command `plannotator annotate <document.md> --gate` presents the document to the user and captures approval or denial feedback.
|
||||
|
||||
Do not skip gates for critical documents. Do not mark a document ready because it seems reasonable. The user must approve it through the gate.
|
||||
|
||||
If a document is denied, update the document from the captured feedback and rerun the gate. Keep the loop tight: one document, one review, one revision cycle.
|
||||
|
||||
## Goal Prompt Rules
|
||||
|
||||
Write the final `/goal` prompt as a compact product brief, not a raw todo dump.
|
||||
|
||||
Include:
|
||||
- outcome
|
||||
- relevant files
|
||||
- constraints and non-goals
|
||||
- plan acceptance criteria and evidence
|
||||
- verification commands
|
||||
- ask-before rules
|
||||
- instruction to use `goals/<slug>/` as the durable plan and append evidence to `progress.jsonl`
|
||||
|
||||
Avoid:
|
||||
- open-ended improvement loops
|
||||
- mixed unrelated missions
|
||||
- vague words like "improve" without measurable proof
|
||||
- instructions to keep working forever
|
||||
- hidden assumptions that are not written into the files
|
||||
|
||||
## Quality Checks
|
||||
|
||||
Before finalizing, verify:
|
||||
- The goal has one clear finish line.
|
||||
- The plan explains what, why, and how before listing work slices.
|
||||
- The plan acceptance criteria can be audited from real artifacts.
|
||||
- Verification commands are concrete.
|
||||
- Risky actions have ask-before rules.
|
||||
- The final `/goal` prompt tells Codex where the goal files live.
|
||||
- All critical documents have passed Plannotator gates.
|
||||
@@ -0,0 +1,4 @@
|
||||
interface:
|
||||
display_name: "Plannotator Goal Setup"
|
||||
short_description: "Build reviewed Codex goal packages"
|
||||
default_prompt: "Use $plannotator-setup-goal to create a reviewed goal package for this project."
|
||||
219
extensions/plannotator/skills/plannotator-setup-goal/scripts/scaffold_goal.py
Executable file
219
extensions/plannotator/skills/plannotator-setup-goal/scripts/scaffold_goal.py
Executable file
@@ -0,0 +1,219 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Scaffold a reviewed Codex goal package under goals/<slug>/."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import datetime as dt
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def slugify(value: str) -> str:
|
||||
slug = re.sub(r"[^a-z0-9]+", "-", value.strip().lower()).strip("-")
|
||||
slug = re.sub(r"-{2,}", "-", slug)
|
||||
return slug or "goal"
|
||||
|
||||
|
||||
def write_file(path: Path, content: str, force: bool) -> None:
|
||||
if path.exists() and not force:
|
||||
return
|
||||
path.write_text(content, encoding="utf-8")
|
||||
|
||||
|
||||
def brief(title: str, objective: str) -> str:
|
||||
return f"""# {title}
|
||||
|
||||
## Outcome
|
||||
|
||||
{objective or "TODO: State the concrete outcome in one or two sentences."}
|
||||
|
||||
## Context
|
||||
|
||||
- TODO: List the project facts, files, docs, user needs, and constraints Codex must know.
|
||||
|
||||
## Constraints
|
||||
|
||||
- TODO: List behavior, APIs, data, UX, performance, compatibility, or process rules that must not regress.
|
||||
|
||||
## Non-Goals
|
||||
|
||||
- TODO: List work that is out of scope for this goal.
|
||||
|
||||
## Ask Before
|
||||
|
||||
- TODO: List decisions, risky operations, external dependencies, product calls, and destructive changes that require user approval.
|
||||
|
||||
## Done Means
|
||||
|
||||
- TODO: Summarize the finish line. Detailed acceptance criteria and evidence belong in `plan.md`.
|
||||
"""
|
||||
|
||||
|
||||
def plan(title: str) -> str:
|
||||
return f"""# Plan: {title}
|
||||
|
||||
## Solution Overview
|
||||
|
||||
TODO: Describe what is being built in plain language. Explain the shape of the solution before diving into tasks.
|
||||
|
||||
## Why This Approach
|
||||
|
||||
TODO: Explain why this direction is appropriate for the project, user goal, constraints, and risk level.
|
||||
|
||||
## How It Will Work
|
||||
|
||||
TODO: Describe the main moving parts, data flow, user flow, files, APIs, or systems involved. Keep this narrative enough that a reviewer can understand the intended solution.
|
||||
|
||||
## Slices
|
||||
|
||||
| Slice | Purpose | Main files or systems | Done when | Risks |
|
||||
| --- | --- | --- | --- | --- |
|
||||
| 1 | TODO | TODO | TODO | TODO |
|
||||
| 2 | TODO | TODO | TODO | TODO |
|
||||
|
||||
## Sequencing
|
||||
|
||||
- TODO: Explain the order of execution and which slices block later slices.
|
||||
|
||||
## Phase Boundaries
|
||||
|
||||
- TODO: State when this goal should end and a new goal should be created instead of stretching this one.
|
||||
|
||||
## Steering Notes
|
||||
|
||||
- TODO: Capture taste calls, product preferences, or review checkpoints the user should steer during execution.
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
- [ ] TODO: Requirement with concrete observable evidence.
|
||||
- [ ] TODO: Requirement with concrete observable evidence.
|
||||
|
||||
## Required Evidence
|
||||
|
||||
| Requirement | Evidence to inspect | Where evidence is recorded |
|
||||
| --- | --- | --- |
|
||||
| TODO | TODO | TODO |
|
||||
|
||||
## Completion Audit
|
||||
|
||||
Before marking the goal complete, Codex must map every explicit requirement, file, command, check, and deliverable to real evidence. If any item is missing, incomplete, weakly verified, or uncertain, the goal is not complete.
|
||||
"""
|
||||
|
||||
|
||||
def verification(title: str) -> str:
|
||||
return f"""# Verification: {title}
|
||||
|
||||
## Commands
|
||||
|
||||
| Command | Purpose | Expected pass condition | Evidence location |
|
||||
| --- | --- | --- | --- |
|
||||
| TODO | TODO | TODO | TODO |
|
||||
|
||||
## Manual Checks
|
||||
|
||||
- TODO: Add browser checks, screenshots, release checks, PR checks, or human review steps.
|
||||
|
||||
## Evidence Rules
|
||||
|
||||
- Record verification results in `progress.jsonl`.
|
||||
- Include command, status, timestamp, and artifact path when available.
|
||||
- Do not rely on passing tests unless they cover the requirement being claimed.
|
||||
"""
|
||||
|
||||
|
||||
def blockers(title: str) -> str:
|
||||
return f"""# Blockers: {title}
|
||||
|
||||
## Open Questions
|
||||
|
||||
- TODO: Questions that must be answered before or during execution.
|
||||
|
||||
## Stop And Ask
|
||||
|
||||
- TODO: Conditions that should pause the goal and ask the user.
|
||||
|
||||
## Dangerous Or High-Risk Actions
|
||||
|
||||
- TODO: Destructive changes, migrations, dependency changes, security-sensitive work, billing/auth changes, or external operations requiring approval.
|
||||
|
||||
## Known Blockers
|
||||
|
||||
- TODO: Current blockers, owners, and next action.
|
||||
"""
|
||||
|
||||
|
||||
def goal_prompt(slug: str, title: str, objective: str) -> str:
|
||||
prompt_objective = objective or f"Complete the reviewed goal package for {title}."
|
||||
return f"""# Codex Goal Prompt: {title}
|
||||
|
||||
After every critical document in this folder is approved with Plannotator, paste or set this goal:
|
||||
|
||||
```text
|
||||
/goal {prompt_objective}
|
||||
|
||||
Use `goals/{slug}/` as the durable source of truth:
|
||||
- Read `brief.md` for the mission, context, constraints, non-goals, and ask-before rules.
|
||||
- Follow `plan.md` for the solution overview, implementation slices, risks, and acceptance criteria.
|
||||
- Run the checks in `verification.md` and record evidence.
|
||||
- Append concrete progress and proof to `progress.jsonl`.
|
||||
- Pause and ask the user for anything listed in `blockers.md` or any similarly risky unresolved decision.
|
||||
|
||||
Do not mark the goal complete until every acceptance item is backed by real evidence and the required verification has passed or the remaining blocker is explicitly documented for the user.
|
||||
```
|
||||
"""
|
||||
|
||||
|
||||
def progress_entry(title: str, objective: str) -> str:
|
||||
now = dt.datetime.now(dt.timezone.utc).replace(microsecond=0).isoformat()
|
||||
entry = {
|
||||
"type": "goal_package_created",
|
||||
"timestamp": now,
|
||||
"title": title,
|
||||
"objective": objective,
|
||||
"evidence": "Initial scaffold created; critical documents still require Plannotator gate approval.",
|
||||
}
|
||||
return json.dumps(entry, ensure_ascii=True) + "\n"
|
||||
|
||||
|
||||
def parse_args() -> argparse.Namespace:
|
||||
parser = argparse.ArgumentParser(description=__doc__)
|
||||
parser.add_argument("--root", default=".", help="Project root where goals/ should be created.")
|
||||
parser.add_argument("--slug", help="Goal folder name. Defaults to a slugified title.")
|
||||
parser.add_argument("--title", required=True, help="Human-readable goal title.")
|
||||
parser.add_argument("--objective", default="", help="One-sentence goal outcome.")
|
||||
parser.add_argument("--force", action="store_true", help="Overwrite existing scaffold files.")
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def main() -> int:
|
||||
args = parse_args()
|
||||
root = Path(args.root).resolve()
|
||||
slug = slugify(args.slug or args.title)
|
||||
goal_dir = root / "goals" / slug
|
||||
goal_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
files = {
|
||||
"brief.md": brief(args.title, args.objective),
|
||||
"plan.md": plan(args.title),
|
||||
"verification.md": verification(args.title),
|
||||
"blockers.md": blockers(args.title),
|
||||
"goal-prompt.md": goal_prompt(slug, args.title, args.objective),
|
||||
}
|
||||
for name, content in files.items():
|
||||
write_file(goal_dir / name, content, args.force)
|
||||
|
||||
progress_path = goal_dir / "progress.jsonl"
|
||||
if not progress_path.exists() or args.force:
|
||||
write_file(progress_path, progress_entry(args.title, args.objective), args.force)
|
||||
|
||||
print(goal_dir)
|
||||
for name in sorted([*files.keys(), "progress.jsonl"]):
|
||||
print(goal_dir / name)
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
88
extensions/plannotator/tool-scope.test.ts
Normal file
88
extensions/plannotator/tool-scope.test.ts
Normal file
@@ -0,0 +1,88 @@
|
||||
import { describe, expect, test } from "bun:test";
|
||||
import {
|
||||
getToolsForPhase,
|
||||
isPlanWritePathAllowed,
|
||||
PLAN_SUBMIT_TOOL,
|
||||
stripPlanningOnlyTools,
|
||||
} from "./tool-scope";
|
||||
|
||||
describe("pi plan tool scoping", () => {
|
||||
test("planning phase adds the submit tool and discovery helpers", () => {
|
||||
expect(getToolsForPhase(["read", "bash", "edit", "write"], "planning")).toEqual([
|
||||
"read",
|
||||
"bash",
|
||||
"edit",
|
||||
"write",
|
||||
"grep",
|
||||
"find",
|
||||
"ls",
|
||||
PLAN_SUBMIT_TOOL,
|
||||
]);
|
||||
});
|
||||
|
||||
test("idle and executing phases strip the planning-only submit tool", () => {
|
||||
const leakedTools = ["read", "bash", "grep", PLAN_SUBMIT_TOOL, "write"];
|
||||
|
||||
expect(getToolsForPhase(leakedTools, "idle")).toEqual([
|
||||
"read",
|
||||
"bash",
|
||||
"grep",
|
||||
"write",
|
||||
]);
|
||||
expect(getToolsForPhase(leakedTools, "executing")).toEqual([
|
||||
"read",
|
||||
"bash",
|
||||
"grep",
|
||||
"write",
|
||||
]);
|
||||
});
|
||||
|
||||
test("stripping planning-only tools preserves unrelated tools", () => {
|
||||
expect(stripPlanningOnlyTools([PLAN_SUBMIT_TOOL, "todo", "question", "read"])).toEqual([
|
||||
"todo",
|
||||
"question",
|
||||
"read",
|
||||
]);
|
||||
});
|
||||
});
|
||||
|
||||
describe("plan write path gate", () => {
|
||||
const cwd = "/r";
|
||||
|
||||
test("allows markdown files anywhere inside cwd", () => {
|
||||
expect(isPlanWritePathAllowed("PLAN.md", cwd)).toBe(true);
|
||||
expect(isPlanWritePathAllowed("plans/auth.md", cwd)).toBe(true);
|
||||
expect(isPlanWritePathAllowed("deeply/nested/dir/notes.mdx", cwd)).toBe(true);
|
||||
});
|
||||
|
||||
test("rejects non-markdown extensions", () => {
|
||||
expect(isPlanWritePathAllowed("src/app.ts", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed("notes.txt", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed("config.json", cwd)).toBe(false);
|
||||
});
|
||||
|
||||
test("rejects files with no extension or bare directories", () => {
|
||||
expect(isPlanWritePathAllowed("plans", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed("PLAN", cwd)).toBe(false);
|
||||
});
|
||||
|
||||
test("rejects traversal and absolute paths outside cwd", () => {
|
||||
expect(isPlanWritePathAllowed("../escape.md", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed("../../etc/passwd.md", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed("/tmp/leak.md", cwd)).toBe(false);
|
||||
});
|
||||
|
||||
test("allows absolute paths that resolve inside cwd", () => {
|
||||
expect(isPlanWritePathAllowed("/r/plans/foo.md", cwd)).toBe(true);
|
||||
});
|
||||
|
||||
test("rejects empty path and the cwd itself", () => {
|
||||
expect(isPlanWritePathAllowed("", cwd)).toBe(false);
|
||||
expect(isPlanWritePathAllowed(".", cwd)).toBe(false);
|
||||
});
|
||||
|
||||
test("extension check is case-insensitive", () => {
|
||||
expect(isPlanWritePathAllowed("PLAN.MD", cwd)).toBe(true);
|
||||
expect(isPlanWritePathAllowed("notes.MdX", cwd)).toBe(true);
|
||||
});
|
||||
});
|
||||
39
extensions/plannotator/tool-scope.ts
Normal file
39
extensions/plannotator/tool-scope.ts
Normal file
@@ -0,0 +1,39 @@
|
||||
import { extname, isAbsolute, relative, resolve } from "node:path";
|
||||
|
||||
export type Phase = "idle" | "planning" | "executing";
|
||||
|
||||
export const PLAN_SUBMIT_TOOL = "plannotator_submit_plan";
|
||||
export const PLANNING_DISCOVERY_TOOLS = ["grep", "find", "ls"] as const;
|
||||
|
||||
const PLANNING_ONLY_TOOLS = new Set<string>([PLAN_SUBMIT_TOOL]);
|
||||
const ALLOWED_PLAN_EXTENSIONS = new Set<string>([".md", ".mdx"]);
|
||||
|
||||
export function stripPlanningOnlyTools(tools: readonly string[]): string[] {
|
||||
return tools.filter((tool) => !PLANNING_ONLY_TOOLS.has(tool));
|
||||
}
|
||||
|
||||
export function getToolsForPhase(
|
||||
baseTools: readonly string[],
|
||||
phase: Phase,
|
||||
): string[] {
|
||||
const tools = stripPlanningOnlyTools(baseTools);
|
||||
if (phase !== "planning") {
|
||||
return [...new Set(tools)];
|
||||
}
|
||||
|
||||
return [
|
||||
...new Set([...tools, ...PLANNING_DISCOVERY_TOOLS, PLAN_SUBMIT_TOOL]),
|
||||
];
|
||||
}
|
||||
|
||||
// Used by both the planning-phase write gate and plannotator_submit_plan.
|
||||
// Path must resolve inside cwd (no traversal, no absolute escape) and end
|
||||
// in a permitted markdown extension.
|
||||
export function isPlanWritePathAllowed(inputPath: string, cwd: string): boolean {
|
||||
if (!inputPath) return false;
|
||||
const targetAbs = resolve(cwd, inputPath);
|
||||
const rel = relative(resolve(cwd), targetAbs);
|
||||
if (rel === "" || rel.startsWith("..") || isAbsolute(rel)) return false;
|
||||
const ext = extname(targetAbs).toLowerCase();
|
||||
return ALLOWED_PLAN_EXTENSIONS.has(ext);
|
||||
}
|
||||
16
extensions/plannotator/tsconfig.json
Normal file
16
extensions/plannotator/tsconfig.json
Normal file
@@ -0,0 +1,16 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "ES2022",
|
||||
"module": "ESNext",
|
||||
"moduleResolution": "bundler",
|
||||
"strict": true,
|
||||
"skipLibCheck": true,
|
||||
"noEmit": true,
|
||||
"allowImportingTsExtensions": true,
|
||||
"isolatedModules": true,
|
||||
"moduleDetection": "force",
|
||||
"types": ["node"]
|
||||
},
|
||||
"include": ["*.ts", "server/**/*.ts"],
|
||||
"exclude": ["**/*.test.ts"]
|
||||
}
|
||||
43
extensions/plannotator/vendor.sh
Executable file
43
extensions/plannotator/vendor.sh
Executable file
@@ -0,0 +1,43 @@
|
||||
#!/usr/bin/env bash
|
||||
# Vendor shared modules into generated/ for Pi extension.
|
||||
# Single source of truth — used by both `npm run build` and CI test workflow.
|
||||
set -euo pipefail
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
mkdir -p generated generated/ai/providers
|
||||
|
||||
for f in feedback-templates prompts review-core storage draft project pr-provider pr-stack pr-github pr-gitlab checklist integrations-common repo reference-common favicon code-file resolve-file config external-annotation agent-jobs worktree worktree-pool html-to-markdown url-to-markdown tour annotate-args at-reference; do
|
||||
src="../../packages/shared/$f.ts"
|
||||
printf '// @generated — DO NOT EDIT. Source: packages/shared/%s.ts\n' "$f" | cat - "$src" > "generated/$f.ts"
|
||||
done
|
||||
|
||||
# Vendor review agent modules from packages/server/ — rewrite imports for generated/ layout
|
||||
for f in codex-review claude-review path-utils; do
|
||||
src="../../packages/server/$f.ts"
|
||||
printf '// @generated — DO NOT EDIT. Source: packages/server/%s.ts\n' "$f" | cat - "$src" \
|
||||
| sed 's|from "./vcs"|from "./review-core.js"|' \
|
||||
| sed 's|from "./pr"|from "./pr-provider.js"|' \
|
||||
| sed 's|from "./path-utils"|from "./path-utils.js"|' \
|
||||
> "generated/$f.ts"
|
||||
done
|
||||
|
||||
# tour-review lives in packages/server/tour/ — parent-relative imports and the
|
||||
# shared tour types package each map to the flat generated/ layout.
|
||||
for f in tour-review; do
|
||||
src="../../packages/server/tour/$f.ts"
|
||||
printf '// @generated — DO NOT EDIT. Source: packages/server/tour/%s.ts\n' "$f" | cat - "$src" \
|
||||
| sed 's|from "\.\./vcs"|from "./review-core.js"|' \
|
||||
| sed 's|from "\.\./pr"|from "./pr-provider.js"|' \
|
||||
| sed 's|from "@plannotator/shared/tour"|from "./tour.js"|' \
|
||||
> "generated/$f.ts"
|
||||
done
|
||||
|
||||
for f in index types provider session-manager endpoints context base-session; do
|
||||
src="../../packages/ai/$f.ts"
|
||||
printf '// @generated — DO NOT EDIT. Source: packages/ai/%s.ts\n' "$f" | cat - "$src" > "generated/ai/$f.ts"
|
||||
done
|
||||
|
||||
for f in claude-agent-sdk codex-sdk opencode-sdk pi-sdk pi-sdk-node pi-events; do
|
||||
src="../../packages/ai/providers/$f.ts"
|
||||
printf '// @generated — DO NOT EDIT. Source: packages/ai/providers/%s.ts\n' "$f" | cat - "$src" > "generated/ai/providers/$f.ts"
|
||||
done
|
||||