Add 5 pi extensions: pi-subagents, pi-crew, rpiv-pi, pi-interactive-shell, pi-intercom

2026-05-08 15:59:25 +10:00
parent d0d1d9b045
commit 31b4110c87
457 changed files with 85157 additions and 0 deletions


@@ -0,0 +1,324 @@
---
name: write-test-cases
description: Generate manual test-case specifications for a single feature by analyzing the implementing code in parallel, producing flow-based test cases plus a regression suite and project-wide coverage map under .rpiv/test-cases/{feature}/. Consumes an outline-test-cases _meta.md when available for warm-start. Use when the user wants test cases written for a specific feature, asks for QA specs, or has run outline-test-cases and is ready to flesh out a feature.
argument-hint: "[feature name, component path, feature slug, or _meta.md path] [additional instructions]"
---
# Write Test Cases
You are tasked with generating manual test case specifications for a single feature by analyzing code in parallel and producing flow-based test case documents for QA teams.
## Initial Setup
When this command is invoked, respond with:
```
I'll generate test cases for this feature. Let me discover the relevant code and analyze it.
```
Then proceed to Step 1.
## Steps
### Step 1: Determine Feature Scope
Parse the user's input to determine the feature under test. Handle these input forms:
1. **_meta.md path** (e.g., `.rpiv/test-cases/users/_meta.md`):
- Read the file. Extract `feature` from frontmatter. Mark as **has _meta.md**.
2. **Feature folder or slug** (e.g., `.rpiv/test-cases/order-management/` or `order-management`):
- Check if `.rpiv/test-cases/{input}/_meta.md` exists
- If yes: read it, extract `feature`, mark as **has _meta.md**
- If no: treat as feature name
3. **Source code path** (e.g., `src/orders/` or `src/api/controllers/OrdersController.ts`):
- Use the path directly as the starting point for analysis
4. **Feature name with optional instructions** (e.g., `Order Management focus on refund edge cases`):
- Parse as `{feature identifier} [additional instructions]`
- Check if `.rpiv/test-cases/{slugified-name}/_meta.md` exists — if yes, read it and mark as **has _meta.md**
- Store additional instructions as supplemental context for agent prompts and checkpoint
5. **No arguments provided**:
```
I'll help you generate test cases. Please provide either:
1. A feature name: `/skill:write-test-cases Order Management`
2. A component path: `/skill:write-test-cases src/orders/`
3. A feature slug: `/skill:write-test-cases order-management`
4. A _meta.md path: `/skill:write-test-cases .rpiv/test-cases/orders/_meta.md`
Add instructions after the feature: `/skill:write-test-cases Order Management focus on refund edge cases`
```
Then wait for input.
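For orientation, the resolution order above can be sketched as a small function. This is a non-normative TypeScript sketch: the slugification rule and the path heuristics are assumptions, and separating trailing instructions from a multi-word feature name is left to judgment.
```typescript
import { existsSync } from "node:fs";

// Kebab-case a feature name: "Order Management" -> "order-management".
// Assumed rule: lowercase, collapse non-alphanumerics to "-", trim dashes.
function slugify(name: string): string {
  return name.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
}

type Scope =
  | { kind: "meta"; metaPath: string }   // has _meta.md: warm-start
  | { kind: "source"; path: string }     // source code path: analysis starting point
  | { kind: "name"; feature: string };   // plain feature name: cold start

function resolveScope(input: string): Scope {
  const arg = input.trim();
  if (arg.endsWith("_meta.md")) return { kind: "meta", metaPath: arg };
  // Paths outside .rpiv/ are treated as source-code starting points.
  if (arg.includes("/") && !arg.startsWith(".rpiv/")) return { kind: "source", path: arg };
  // Otherwise: feature folder, slug, or name. Warm-start if a _meta.md exists.
  const slug = slugify(arg.replace(/^\.rpiv\/test-cases\//, ""));
  const metaPath = `.rpiv/test-cases/${slug}/_meta.md`;
  if (existsSync(metaPath)) return { kind: "meta", metaPath };
  return { kind: "name", feature: arg };
}
```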
#### Warm-Start from _meta.md
When `_meta.md` is available, read it FULLY and extract:
- **Identity**: `feature`, `module`, `portal`, `slug` from frontmatter
- **Routes**: from `## Routes` section — route paths and component names
- **Endpoints**: from `## Endpoints` section — HTTP methods and paths
- **Scope decisions**: from `## Scope Decisions` section — in/out of scope items
- **Domain context**: from `## Domain Context` section — business rules and intentional behaviors
- **Checkpoint history**: from `## Checkpoint History` section — prior Q&A pairs
Report:
```
[Warm-start]: Found _meta.md for "{feature}" ({module}, {portal}). {N} routes, {M} endpoints.
```
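Concretely, the warm-start extraction might look like the sketch below. The exact _meta.md layout is owned by outline-test-cases, so the frontmatter and section shapes assumed here are illustrative.
```typescript
import { readFileSync } from "node:fs";

interface MetaWarmStart {
  feature?: string; module?: string; portal?: string; slug?: string;
  routes: string[]; endpoints: string[];
}

function readMeta(metaPath: string): MetaWarmStart {
  const text = readFileSync(metaPath, "utf8");
  // Frontmatter is the block between the leading "---" fences.
  const fm = /^---\n([\s\S]*?)\n---/.exec(text)?.[1] ?? "";
  const field = (key: string) =>
    new RegExp(`^${key}:\\s*"?([^"\\n]+)"?`, "m").exec(fm)?.[1]?.trim();
  // A "## Heading" section runs until the next "## " heading.
  const section = (heading: string) =>
    text.split(/\n## /).find((p) => p.startsWith(heading))?.slice(heading.length) ?? "";
  const bullets = (body: string) =>
    body.split("\n").filter((l) => l.startsWith("- ")).map((l) => l.slice(2).trim());
  return {
    feature: field("feature"), module: field("module"),
    portal: field("portal"), slug: field("slug"),
    routes: bullets(section("Routes")),
    endpoints: bullets(section("Endpoints")),
  };
}
```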
When no _meta.md, detect the project's technology stack before spawning agents: check `package.json` for framework indicators (see Framework Detection Reference at end of document). If no `package.json`, check for `.csproj`/`.sln` (.NET), `pyproject.toml`/`requirements.txt` (Python). Use the detected framework to adapt Agent A's prompt in Step 2.
### Step 2: Discover Feature Code (parallel agents)
Spawn the following agents in parallel using the Agent tool. Wait for ALL agents to complete before proceeding.
**Agent A — Web Layer Discovery:**
- subagent_type: `codebase-locator`
- When _meta.md is available: "Validate these known Web Layer entry points for {feature name}: {routes and endpoints from _meta.md}. Check if they still exist and find any NEW entry points not in this list. Report: confirmed (still exists), removed (no longer found), new (not in the list)."
- When no _meta.md: "Find all Web Layer entry points for the {feature name} feature{framework_hint}. Look for: controllers, route definitions, page components, form handlers, API endpoints. Search across all web layers (API, Admin, Customer Portal, Host, etc.). Also find frontend service files, HTTP clients, or API call sites that reference these endpoints — report which frontend pages call which backend URLs. For each entry point found, report: file path, HTTP method/route or page path, and a one-line description of what it does. Group by web layer."
{framework_hint} is " in this {Framework} project" when a framework is detected (e.g., " in this Angular project"), or empty string if none detected. See Framework Detection Reference at end of document.
**Agent B — Existing Test Cases:**
- subagent_type: `test-case-locator`
- Prompt: "Search for existing test cases related to {feature name} in .rpiv/test-cases/. Report any existing TCs with their IDs, titles, and priorities so we can avoid duplicates."
Wait for both agents to complete before proceeding.
### Step 3: Analyze Feature Code (parallel agents)
Using the entry points discovered in Step 2 (validated against _meta.md when available), spawn analysis agents in parallel. When _meta.md is available, enrich prompts: append scope exclusions from `## Scope Decisions` as {scope_context}, domain rules from `## Domain Context` as {domain_context}, and endpoint list as {endpoint_scope}. When no _meta.md, omit these.
**Agent C — Code Analysis:**
- subagent_type: `codebase-analyzer`
- Prompt: "Analyze the {feature name} feature implementation in detail. Read the controllers/route handlers at {discovered paths}. For each endpoint/action, determine: 1) What user input is accepted (request body, query params, form fields)? 2) What validation rules exist — report specific limits (max lengths, regex patterns, required vs optional)? 3) What business logic is executed? 4) What are the success/error responses? 5) What authorization/permissions are required? Focus on understanding USER FLOWS — sequences of actions a user would perform to accomplish a goal. ALSO read the frontend page components and templates at {discovered frontend paths}. Extract what a QA tester would actually see: exact button labels, form field labels/placeholders, navigation items, table column headers, success/error messages, and conditional UI (role- or state-dependent elements). Resolve any i18n translation keys to displayed text. Report UI elements per page/route alongside the backend analysis.{scope_context}{domain_context}"
**Agent D — Postcondition Discovery:**
- subagent_type: `integration-scanner`
- Prompt: "Find all side effects triggered by {feature name} actions{endpoint_scope}. Look for: domain events published, message handlers invoked, email/notification triggers, external API calls, database cascades, cache invalidations, audit log entries, webhook dispatches. For each side effect, report: what triggers it (which action/endpoint) and where the handler code lives (file:line). Do NOT describe what the handler does — only locate it. These locations become postconditions in test cases.{scope_context}"
Wait for ALL agents to complete before proceeding.
### Step 4: Synthesize Findings
Compile all agent results into a feature analysis:
1. **Map user flows** — Group the discovered endpoints/pages into logical user journeys:
- Identify the natural sequence of actions (e.g., browse -> select -> configure -> checkout -> confirm)
- Each flow should represent a complete user goal, not isolated actions
- A feature typically produces 3-8 flows depending on complexity
- **When to separate**: If view and edit serve different user goals, keep them as separate flows. If a sub-operation (e.g., replace, export, bulk action) has its own trigger and confirmation, it deserves its own flow. If different user roles interact with the same entity differently, split by role.
- **Use real UI element names** from Agent C's frontend analysis — actual button labels, form field names, navigation text, displayed messages. Do not infer UI element names from backend action semantics.
2. **Enrich with postconditions** — For each flow, attach the side effects discovered by the integration-scanner:
- Map domain events to specific flow steps
- Include cross-system effects (emails, webhooks, inventory changes)
3. **Check for duplicates** — Cross-reference synthesized flows against existing TCs from test-case-locator:
- If an existing TC covers a flow, note it and skip that flow
- If partial overlap, note the gap to fill
4. **Assign priorities**:
- **high**: Core happy path, payment/money flows, data integrity, security-critical
- **medium**: Alternative paths, common edge cases, permission boundaries
- **low**: Rare edge cases, cosmetic validation, error message wording
5. **Determine test case IDs**:
- Module abbreviation: from _meta.md `module` field, or derive from feature name (e.g., Order Management -> ORD)
- Numbering: start at 001, or continue from highest existing TC ID if duplicates found
- Format: `TC-{MODULE}-{NNN}`
**Do NOT write test cases yet** — proceed to the developer checkpoint first.
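The ID scheme in item 5 is mechanical. A sketch, assuming the three-letter abbreviation comes from the first word of the feature name (one plausible reading of the ORD example) and that existing IDs were already collected by test-case-locator:
```typescript
// "Order Management" -> "ORD" (assumed rule: first 3 letters of first word).
function moduleAbbrev(feature: string): string {
  return feature.trim().split(/\s+/)[0].slice(0, 3).toUpperCase();
}

// Continue from the highest existing number for this module, else start at 001.
function nextTcId(module: string, existingIds: string[]): string {
  const nums = existingIds
    .map((id) => new RegExp(`^TC-${module}-(\\d{3})$`).exec(id)?.[1])
    .filter((n): n is string => n !== undefined)
    .map(Number);
  const next = (nums.length ? Math.max(...nums) : 0) + 1;
  return `TC-${module}-${String(next).padStart(3, "0")}`;
}

// nextTcId("ORD", ["TC-ORD-001", "TC-ORD-002"]) -> "TC-ORD-003"
```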
### Step 5: Developer Checkpoint
Present a flow summary, then ask grounded questions one at a time.
**Flow summary** (under 20 lines):
```
## Feature: {Feature Name}
Entry points: {N} endpoints across {M} web layers
Postconditions: {K} side effects discovered
Existing TCs: {X} found (will skip duplicates)
### Proposed Test Cases:
1. TC-{MOD}-001: {Flow title} (priority: high)
Steps: {brief flow summary — e.g., "browse -> add to cart -> checkout -> payment -> confirm"}
2. TC-{MOD}-002: {Flow title} (priority: medium)
Steps: {brief flow summary}
{etc.}
Flows skipped (already covered): {list or "none"}
```
When _meta.md is available, prepend:
```
### Prior Scope Decisions (from outline):
- {decision 1}
- {decision 2}
These are carried forward. I'll only ask about new findings.
```
Then ask grounded questions — **one at a time**. Use a **❓ Question:** prefix so the developer knows their input is needed. Each question must reference real findings with file:line evidence and pull NEW information from the developer. Focus on:
- Missing flows the code analysis couldn't detect (e.g., "I found create/update/delete flows but no bulk import — is that a feature?")
- Postconditions the integration-scanner might have missed (e.g., "No webhook found for order status changes — is there an external notification I'm not seeing?")
- Priority overrides (e.g., "I marked refund flow as medium — should it be high given payment implications?")
- User roles and permissions that affect test preconditions
- Test data requirements not obvious from code
When _meta.md is available: skip questions already answered in `## Checkpoint History`. Only ask about new findings not covered by prior decisions.
**CRITICAL**: Ask ONE question at a time. Wait for the answer before asking the next. Lead with your most significant finding.
**Choosing question format:**
- **`ask_user_question` tool** — when your question has 2-4 concrete options from code analysis (pattern conflicts, integration choices, scope boundaries, priority overrides). The user can always pick "Other" for free-text. Example: Use the `ask_user_question` tool with the question "Found 2 mapping approaches — which should new code follow?". Options: "Manual mapping (Recommended)" (Used in OrderService (src/services/OrderService.ts:45) — 8 occurrences); "AutoMapper" (Used in UserService (src/services/UserService.ts:12) — 2 occurrences).
- **Free-text with ❓ Question: prefix** — when the question is open-ended and options can't be predicted (discovery, "what am I missing?", corrections). Example:
"❓ Question: Integration scanner found no background job registration for this area. Is that expected, or is there async processing I'm not seeing?"
**Batching**: When you have 2-4 independent questions (answers don't depend on each other), you MAY batch them in a single `ask_user_question` call. Keep dependent questions sequential.
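The `ask_user_question` schema is defined by the host runtime; purely as illustration of the option-based form described above (field names here are assumptions, not the tool's real signature):
```typescript
// Illustrative shape only: consult the runtime's tool definition.
interface QuestionOption {
  label: string;        // "Manual mapping (Recommended)"
  description: string;  // evidence with file:line, e.g. "OrderService (src/services/OrderService.ts:45) — 8 occurrences"
}
interface CheckpointQuestion {
  question: string;           // "Found 2 mapping approaches — which should new code follow?"
  options: QuestionOption[];  // 2-4 concrete options; "Other" free-text is always available
}

// A batched call carries 2-4 independent questions in one invocation.
const batch: CheckpointQuestion[] = [{
  question: "Found 2 mapping approaches — which should new code follow?",
  options: [
    { label: "Manual mapping (Recommended)",
      description: "OrderService (src/services/OrderService.ts:45) — 8 occurrences" },
    { label: "AutoMapper",
      description: "UserService (src/services/UserService.ts:12) — 2 occurrences" },
  ],
}];
```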
**Classify each response:**
**Corrections** (e.g., "that flow doesn't exist", "wrong priority"):
- Update flow list. Record in notes.
**Missing flows** (e.g., "you missed the bulk export feature"):
- Spawn targeted **codebase-analyzer** (max 1 agent) to analyze the missing area.
- Add the flow to the list.
**Scope adjustments** (e.g., "skip admin flows, focus on customer portal"):
- Remove out-of-scope flows. Record the adjustment.
**Confirmations** (e.g., "looks good", "yes proceed"):
- Proceed to Step 6.
### Step 6: Generate Test Case Documents
Read the templates before writing:
- Read the full test case template at `templates/test-case.md`
- Read the full regression suite template at `templates/regression-suite.md`
See `examples/order-placement-flow.md` (e-commerce order flow), `examples/customer-auth-flow.md` (authentication flow), and `examples/team-management-flow.md` (SaaS team management flow) for well-formed test case examples.
What makes these examples good:
- **Steps are user-centric** — "Navigate to...", "Click...", "Enter..." — not technical ("POST to /api/orders")
- **Expected results are observable** — what the user SEES, not internal state changes
- **Postconditions verify side effects** — email sent, inventory updated, audit logged
- **Edge cases are separate bullets** — not crammed into steps
- **Preconditions are specific** — exact user role, required test data, system state
See `examples/order-management-suite.md` and `examples/team-management-suite.md` for well-formed regression suite examples.
What makes these examples good:
- **Smoke subset is minimal** — 2-4 high-priority TCs covering critical paths
- **Priority ordering** — high -> medium -> low within the full regression table
- **Coverage map** cross-references TCs against feature sub-areas
- **Gaps section** flags known uncovered areas for future work
**For each confirmed flow**, generate a test case document:
- Follow the test-case.md template exactly
- Write user-facing actions in Steps (what they click/type/navigate), not API calls
- Use actual UI element names discovered by Agent C (button labels, form fields, navigation items, messages) — do NOT fabricate element names from backend semantics. If Agent C didn't find a specific label, describe the element generically (e.g., "submit button" not "Click 'Save Changes'")
- Expected results describe what the user observes (success message, redirect, updated list)
- Postconditions describe system-level side effects (from integration-scanner findings)
- Edge cases list variant scenarios worth separate testing
- Include preconditions: user role, required test data, system state
- Include `commit` in frontmatter with current git commit hash
**After all TCs**, generate the regression suite document:
- Follow the regression-suite.md template
- List all TCs with priority ordering (high -> medium -> low)
- Mark smoke test subset (TCs that cover critical paths in minimal time)
- Include coverage map cross-referencing TCs to feature sub-areas
- Calculate total estimated execution time
- Include `commit` in overview with current commit hash
### Step 7: Write Files & Update Artifacts
1. **Determine output directory**:
- Target: `.rpiv/test-cases/{feature-slug}/` in the current working directory
- Feature slug: from _meta.md (when available) or kebab-case from feature name
- Create the directory if it doesn't exist
2. **Write all files at once** using the Write tool:
- Individual TC files: `TC-{MOD}-{NNN}_{flow-slug}.md`
- Regression suite: `_regression-suite.md` (underscore prefix sorts it first)
- Do NOT ask for confirmation before each file — batch mode
3. **Update _meta.md** (when it exists):
- Set `tc_count` to the number of TCs written
- Set `status` to `generated`
- Update `date` to current date
- Append new checkpoint Q&A pairs to `## Checkpoint History` under a new date header — only if new Q&A occurred during Step 5
4. **Rebuild root coverage map** at `.rpiv/test-cases/_coverage-map.md` (see the aggregation sketch after this list):
- Read the coverage map template at `templates/coverage-map.md`
- Glob for all `_regression-suite.md` files across `.rpiv/test-cases/*/`
- Glob for all `_meta.md` files across `.rpiv/test-cases/*/`
- Read each file's key data (frontmatter, summary stats, coverage map, smoke subset)
- Aggregate into the coverage map template
- Write the file (if only one feature exists, the map shows just that feature — it grows over time)
5. **Present summary**:
```
## Test Cases Written
| File | Priority | Flow |
|------|----------|------|
| TC-ORD-001_place-order.md | high | Place and confirm order |
| TC-ORD-002_cancel-order.md | medium | Cancel order before fulfillment |
| _regression-suite.md | — | Feature summary (N TCs, ~Xm execution) |
| _coverage-map.md | — | Project-wide coverage (N features, M TCs) |
Output: `.rpiv/test-cases/{feature-slug}/`
Total: {N} test cases + 1 regression suite + 1 coverage map
Review the generated test cases and let me know if you'd like adjustments.
```
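The rebuild in item 4 is a glob-and-aggregate pass. A sketch, assuming a standard glob utility and the Overview field names used in the regression-suite template:
```typescript
import { readFileSync } from "node:fs";
import { globSync } from "glob"; // assumption: any glob utility works here

interface SuiteSummary { feature: string; totalTcs: number; estMinutes: number; }

// One summary per feature folder, read from its regression suite's Overview.
function collectSuites(): SuiteSummary[] {
  return globSync(".rpiv/test-cases/*/_regression-suite.md").map((path) => {
    const text = readFileSync(path, "utf8");
    const grab = (re: RegExp) => re.exec(text)?.[1]?.trim() ?? "";
    return {
      feature: grab(/^- Feature:\s*(.+)$/m),
      totalTcs: Number(grab(/^- Total test cases:\s*(\d+)/m)),
      estMinutes: Number(grab(/^- Estimated execution:\s*~(\d+)/m)),
    };
  });
}
// Project totals for the coverage map are sums over these summaries;
// per-portal grouping comes from each feature's _meta.md `portal` field.
```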
### Step 8: Handle Follow-ups
- **Append, never rewrite.** Edit specific TC files directly; preserve TC IDs (continue numbering from the highest existing ID when adding).
- **Re-dispatch narrowly.** Spawn one targeted `codebase-analyzer` for missing flows. Do NOT re-run the full skill.
- **Regenerate suites on any TC change.** Always regenerate `_regression-suite.md` and `_coverage-map.md` to keep them in sync.
- **When to re-invoke instead.** Re-run `/skill:write-test-cases <feature>` for a different feature; for the same feature, prefer in-place edits. The previous block's `Next step:` stays valid.
Skill-specific verbs:
- **Add missing flows**: spawn targeted `codebase-analyzer`, generate new TCs, regenerate suites.
- **Adjust priorities**: edit TC frontmatter, regenerate suites.
- **Modify steps**: edit specific TC files directly.
- **Delete TCs**: remove the file, regenerate suites.
## Framework Detection Reference
| Indicator | Framework | Detection |
|-----------|-----------|-----------|
| `@angular/core` | Angular | `package.json` dependencies |
| `react-router-dom` / `react-router` / `@react-router` | React | `package.json` dependencies |
| `next` | Next.js | `package.json` dependencies |
| `vue-router` | Vue Router | `package.json` dependencies |
| `nuxt` | Nuxt | `package.json` dependencies |
| `.csproj` / `.sln` | .NET | File presence in project root |
| `pyproject.toml` / `requirements.txt` with Django/Flask/FastAPI | Python | File presence + dependency check |
| None found | Backend-only | Fallback — omit framework hint |
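A detection sketch that follows the table directly (Node-flavored; the Python row still needs a dependency check inside `pyproject.toml`/`requirements.txt`, which is elided here):
```typescript
import { existsSync, readFileSync, readdirSync } from "node:fs";

function detectFramework(root = "."): string | undefined {
  const pkgPath = `${root}/package.json`;
  if (existsSync(pkgPath)) {
    const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    if (deps["@angular/core"]) return "Angular";
    if (deps["next"]) return "Next.js"; // meta-frameworks checked before routers
    if (deps["nuxt"]) return "Nuxt";
    if (deps["react-router-dom"] || deps["react-router"]) return "React";
    if (deps["vue-router"]) return "Vue Router";
    return undefined; // package.json present but no router indicator
  }
  const files = readdirSync(root);
  if (files.some((f) => f.endsWith(".csproj") || f.endsWith(".sln"))) return ".NET";
  if (files.includes("pyproject.toml") || files.includes("requirements.txt")) {
    return "Python"; // still verify Django/Flask/FastAPI before hinting
  }
  return undefined; // backend-only fallback: omit the framework hint
}
```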
## Important Notes
- **Manual test cases for QA teams** — NOT automated test code. Write in natural language from the user's perspective.
- **Flow-level granularity** — each TC covers a complete user journey, not a single endpoint.
- **Postconditions are critical** — side effects from domain events are what distinguish a thorough TC from a superficial one.
- **Never skip the developer checkpoint** — QA domain knowledge (which flows matter most, what edge cases exist in production) is the highest-value signal.
- **_meta.md is warm start, not truth** — always validate against live code via Agent A, even with _meta.md available.
- **File reading**: Always read templates FULLY (no limit/offset) before generating test cases.
- **Critical ordering**: Follow the numbered steps exactly.
- ALWAYS wait for discovery agents (Step 2) before spawning analysis agents (Step 3)
- ALWAYS wait for ALL agents to complete before synthesizing (Step 4)
- ALWAYS resolve all checkpoint questions (Step 5) before generating TCs (Step 6)
- ALWAYS regenerate regression suite and coverage map after any TC writes (Step 7)
- NEVER write test case files with placeholder values
- **Duplicate avoidance**: Always check existing TCs via test-case-locator before generating new ones.
- **ID continuity**: If existing TCs exist for this module, continue numbering from the highest existing ID.


@@ -0,0 +1,50 @@
---
id: TC-AUTH-001
title: "Customer magic-link login"
feature: "Customer Authentication"
priority: high
type: functional
status: draft
tags: ["auth", "login", "magic-link", "customer-portal", "happy-path"]
commit: abc1234
---
# Customer magic-link login
## Objective
Verify that a customer can request a magic-link login email, click the link, and be authenticated into the Customer Portal with the correct session and permissions.
## Preconditions
- Customer account exists with email "test@example.com"
- Email delivery service is configured in test mode
- Customer is NOT currently logged in
- No active sessions exist for this customer
## Steps
| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | Navigate to Customer Portal login page | Login form displays with email field and "Send Magic Link" button |
| 2 | Enter "test@example.com" in email field | Email field validates format, no error shown |
| 3 | Click "Send Magic Link" | Success message: "Check your email for a login link". Button disabled for 60s |
| 4 | Open email inbox and find magic-link email | Email received within 2 minutes with one-time login URL |
| 5 | Click the magic-link URL in the email | Browser opens, brief loading state, redirects to Customer Portal dashboard |
| 6 | Verify dashboard displays correctly | Customer name in header, recent orders listed, subscription status visible |
| 7 | Refresh the page | Session persists — dashboard still shows, not redirected to login |
## Postconditions
- Session token created and stored (verify via browser cookies/localStorage)
- Login event recorded in audit log with timestamp, IP address, and auth method "magic-link"
- Magic link marked as used — clicking the same link again shows "Link already used" page
- Last login timestamp updated on customer record
## Edge Cases
- Expired magic link (>15 minutes old) — verify "Link expired, request a new one" message
- Already-used magic link — verify "Link already used" message
- Non-existent email address — verify same success message shown (no email enumeration)
- Multiple magic links requested — verify only the most recent link works
- Magic link opened in different browser/device — verify it still works
## Notes
- Related TCs: TC-AUTH-002 (logout), TC-AUTH-003 (session expiry)
- Dependencies: Email delivery service in test mode, ability to inspect test emails
- Known issues: Magic link emails may be delayed up to 2 minutes in test environments


@@ -0,0 +1,57 @@
# Order Management — Regression Suite
## Overview
- Feature: Order Management
- Module: ORD
- Total test cases: 6
- Estimated execution: ~31 minutes
- Last generated: 2026-03-31
- Commit: abc1234
## Smoke Test Subset
| Priority | TC ID | Title | Est. Time |
|----------|-------|-------|-----------|
| high | TC-ORD-001 | Place order with physical products | ~5m |
| high | TC-ORD-004 | Process full refund | ~5m |
**Smoke total: ~10 minutes**
## Full Regression
### High Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-ORD-001 | Place order with physical products | functional | ~5m |
| TC-ORD-003 | Fulfill order and trigger shipping | functional | ~8m |
| TC-ORD-004 | Process full refund | functional | ~5m |
### Medium Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-ORD-002 | Cancel order before fulfillment | functional | ~5m |
| TC-ORD-005 | Admin edits order line items | functional | ~5m |
### Low Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-ORD-006 | Filter and search order list | regression | ~3m |
**Full regression total: ~31 minutes**
## Coverage Map
| Area | TCs Covering |
|------|-------------|
| Order Creation | TC-ORD-001 |
| Order Cancellation | TC-ORD-002 |
| Fulfillment | TC-ORD-003 |
| Refunds | TC-ORD-004 |
| Order Editing | TC-ORD-005 |
| Order Listing/Search | TC-ORD-006 |
| Payment Processing | TC-ORD-001, TC-ORD-004 |
| Email Notifications | TC-ORD-001, TC-ORD-003, TC-ORD-004 |
| Inventory Updates | TC-ORD-001, TC-ORD-003, TC-ORD-004 |
## Gaps
- Bulk order import — no TC generated, feature not yet implemented
- Partial refund flow — deferred, pending UX design for line-item selection
- Order export to CSV — low priority, cosmetic feature


@@ -0,0 +1,54 @@
---
id: TC-ORD-001
title: "Place order with physical products"
feature: "Order Management"
priority: high
type: functional
status: draft
tags: ["orders", "checkout", "payment", "happy-path"]
commit: abc1234
---
# Place order with physical products
## Objective
Verify that a customer can browse products, add them to cart, complete checkout with a valid credit card, and receive an order confirmation. This is the primary revenue-generating flow.
## Preconditions
- Customer account exists with verified email
- At least 2 physical products are published with available inventory
- Stripe test mode is configured with valid API keys
- Customer is logged into the Customer Portal
## Steps
| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | Navigate to product catalog page | Product listing displays with prices and availability |
| 2 | Click "Add to Cart" on first product | Cart badge updates to show 1 item, toast confirms addition |
| 3 | Click "Add to Cart" on second product | Cart badge updates to 2 items |
| 4 | Click cart icon in navigation header | Cart drawer slides open showing both products with quantities and subtotal |
| 5 | Click "Proceed to Checkout" | Checkout page loads with shipping address form |
| 6 | Enter valid shipping address and select shipping method | Shipping cost calculates and order total updates |
| 7 | Enter valid test credit card (4242 4242 4242 4242) | Card field shows validated state with card brand icon |
| 8 | Click "Place Order" | Loading spinner appears, then redirects to order confirmation page |
| 9 | Verify order confirmation page | Order number displayed, line items match cart, total matches checkout |
## Postconditions
- Order record created in database with status "pending_fulfillment"
- Order confirmation email sent to customer's email address
- Inventory quantity decremented for both purchased products
- Payment charge captured in Stripe (verify in Stripe dashboard)
- Webhook dispatched to fulfillment service with order details
- Audit log entry created with action "order.created" and customer ID
## Edge Cases
- Order with quantity > 1 of same product — verify inventory deducts correct amount
- Order with product at maximum inventory — verify "last item" handling
- Payment gateway timeout — verify order is not created, customer sees retry option
- Browser back button during payment processing — verify no duplicate charges
- Coupon code applied at checkout — verify discount reflected in total and payment
## Notes
- Related TCs: TC-ORD-002 (cancel order), TC-ORD-004 (refund order)
- Dependencies: Stripe test environment, fulfillment webhook endpoint
- Known issues: Intermittent Stripe webhook delay (up to 30s) may affect postcondition verification


@@ -0,0 +1,56 @@
---
id: TC-TEAM-001
title: "Invite and onboard new team member"
feature: "Team Management"
priority: high
type: functional
status: draft
tags: ["team", "invitation", "onboarding", "roles", "happy-path"]
commit: abc1234
---
# Invite and onboard new team member
## Objective
Verify that a workspace admin can invite a new team member by email, the invitee receives an invitation, and upon accepting they gain access to the workspace with the assigned role and permissions.
## Preconditions
- Workspace exists with at least 1 admin user
- Admin user is logged into the workspace Settings area
- Invitation email service is configured in test mode
- Target email address ("newmember@example.com") is not already a workspace member
- Workspace is not at member limit
## Steps
| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | Navigate to Settings > Team Members page | Team members list displays with current members and their roles |
| 2 | Click "Invite Member" button | Invitation form appears with email field and role dropdown |
| 3 | Enter "newmember@example.com" in email field | Email field validates format, no error shown |
| 4 | Select "Editor" from role dropdown | Role selection highlights "Editor" with permission summary tooltip |
| 5 | Click "Send Invitation" | Success toast: "Invitation sent to newmember@example.com". Member appears in list with status "Invited" |
| 6 | Open invitee's email inbox | Invitation email received with workspace name and "Accept Invitation" button |
| 7 | Click "Accept Invitation" link in email | Browser opens account creation page (or login page if account exists) |
| 8 | Complete account creation with name and password | Account created, redirects to workspace dashboard |
| 9 | Verify workspace dashboard access | Dashboard loads with workspace content visible, "Editor" badge in profile menu |
| 10 | Return to admin's Team Members page | New member shows status "Active" with role "Editor" |
## Postconditions
- Invitation record created with status "accepted" and acceptance timestamp
- New user account linked to workspace with "Editor" role
- Invitation email marked as used — re-clicking link shows "Already accepted" message
- Audit log entry created with action "team.member_invited" (admin) and "team.invitation_accepted" (invitee)
- Workspace member count incremented by 1
- Welcome notification sent to new member (in-app)
## Edge Cases
- Invite email already associated with an existing account — verify login flow instead of signup
- Invite with "Admin" role — verify admin permissions granted after acceptance
- Re-invite after previous invitation expired — verify new invitation supersedes old
- Invite when workspace is at member limit — verify error message shown before sending
- Invited user closes browser mid-signup and returns via link later — verify flow resumes
## Notes
- Related TCs: TC-TEAM-002 (change member role), TC-TEAM-003 (deactivate member)
- Dependencies: Email delivery service in test mode, invitation token service
- Known issues: Invitation emails may take up to 1 minute in test environments


@@ -0,0 +1,54 @@
# Team Management — Regression Suite
## Overview
- Feature: Team Management
- Module: TEAM
- Total test cases: 5
- Estimated execution: ~23 minutes
- Last generated: 2026-04-01
- Commit: abc1234
## Smoke Test Subset
| Priority | TC ID | Title | Est. Time |
|----------|-------|-------|-----------|
| high | TC-TEAM-001 | Invite and onboard new team member | ~5m |
| high | TC-TEAM-003 | Deactivate team member | ~5m |
**Smoke total: ~10 minutes**
## Full Regression
### High Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-TEAM-001 | Invite and onboard new team member | functional | ~5m |
| TC-TEAM-003 | Deactivate team member | functional | ~5m |
### Medium Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-TEAM-002 | Change member role | functional | ~5m |
| TC-TEAM-004 | Manage team member permissions | functional | ~5m |
### Low Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-TEAM-005 | Filter and search team member list | regression | ~3m |
**Full regression total: ~23 minutes**
## Coverage Map
| Area | TCs Covering |
|------|-------------|
| Invitation Flow | TC-TEAM-001 |
| Role Management | TC-TEAM-002 |
| Member Deactivation | TC-TEAM-003 |
| Permission Configuration | TC-TEAM-002, TC-TEAM-004 |
| Member Listing/Search | TC-TEAM-005 |
| Audit Logging | TC-TEAM-001, TC-TEAM-003 |
| Email Notifications | TC-TEAM-001, TC-TEAM-003 |
## Gaps
- Bulk member import via CSV — feature exists but UI is in beta, deferred
- SSO/SAML integration — separate authentication feature, not team management
- Member activity reporting — read-only dashboard, low testing value


@@ -0,0 +1,64 @@
```markdown
# {Project Name} — Test Case Coverage Map
## Overview
- Project: {project name}
- Total features: {N} covered
- Total test cases: {N}
- Estimated full regression: ~{X} minutes
- Last updated: {YYYY-MM-DD}
- Commit: {commit-hash}
## Portal Summary
### {Portal Name} ({N} features, {M} TCs, ~{X}m)
| Feature | Module | TCs | High | Med | Low | Smoke | Est. Time |
|---------|--------|-----|------|-----|-----|-------|-----------|
| {Feature Name} | {MOD} | {N} | {h} | {m} | {l} | {smoke count} | ~{X}m |
## Project-Wide Smoke Suite
{Minimum TCs across ALL features for quick project-level sanity check}
| Portal | TC ID | Feature | Title | Est. Time |
|--------|-------|---------|-------|-----------|
| {portal} | TC-{MOD}-{NNN} | {feature} | {title} | ~{N}m |
**Project smoke total: ~{X} minutes**
## Cross-Feature Coverage
{Areas that span multiple features — verify these when cross-cutting changes are made}
| Cross-Cutting Area | Features Involved | TCs Covering |
|-------------------|-------------------|-------------|
| {e.g., Payment Processing} | {Order Mgmt, Invoice Mgmt} | TC-ORD-001, TC-INV-003 |
## Priority Distribution
| Priority | Count | Percentage |
|----------|-------|-----------|
| High | {N} | {X}% |
| Medium | {N} | {X}% |
| Low | {N} | {X}% |
## Uncovered Areas
{Features or sub-areas without test cases — flagged for future work}
- {uncovered area} — {reason: not yet generated / out of scope / deferred}
## Phantom Features (Backend-Only)
{Backend endpoints with no frontend exposure — skipped during generation. Populated from _meta.md data when available.}
- {controller/endpoint group} — {reason: platform API / webhook / deprecated / sub-service}
## Test Data Requirements
{Aggregate test data needs across all features. Populated from _meta.md Test Data Requirements sections when available.}
- {e.g., "Stripe test mode with valid API keys (Order Mgmt, Invoice Mgmt)"}
- {e.g., "At least 2 published products with inventory (Order Mgmt, Product Mgmt)"}
```
**Portal Summary** groups features by application/portal for QA team assignment. Each portal section includes aggregate stats. Portal names come from `_meta.md` `portal` field when available, or default to "General" when features were generated in standalone mode.
**Project-Wide Smoke Suite** selects the highest-priority TCs from each feature's smoke subset — typically 1-2 per feature. A QA tester should be able to run the project smoke suite in under 30 minutes.
**Cross-Feature Coverage** identifies shared concerns (payment, email, auth, inventory) and which TCs from different features exercise them. Useful when a cross-cutting change is made — QA knows exactly which TCs to re-run. Built by scanning postconditions across all regression suites for shared keywords.
**Phantom Features** documents what was NOT covered and why. Populated from `_meta.md` data (pipeline mode). In standalone mode, populated from phantom detection results. If no phantom data is available, omit this section.
**Test Data Requirements** consolidates prerequisites across all features so QA can set up a test environment once. Populated from `_meta.md` `## Test Data Requirements` sections. If no _meta.md data is available, omit this section.


@@ -0,0 +1,63 @@
```markdown
# {Feature Name} — Regression Suite
## Overview
- Feature: {feature name}
- Module: {module abbreviation}
- Total test cases: {N}
- Estimated execution: ~{X} minutes
- Last updated: {YYYY-MM-DD}
- Commit: {commit-hash}
## Smoke Test Subset
{Minimum set of TCs that cover critical paths — run these for quick sanity checks}
| Priority | TC ID | Title | Est. Time |
|----------|-------|-------|-----------|
| high | TC-{MOD}-{NNN} | {title} | ~{N}m |
**Smoke total: ~{X} minutes**
## Full Regression
### High Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-{MOD}-{NNN} | {title} | {type} | ~{N}m |
### Medium Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-{MOD}-{NNN} | {title} | {type} | ~{N}m |
### Low Priority
| TC ID | Title | Type | Est. Time |
|-------|-------|------|-----------|
| TC-{MOD}-{NNN} | {title} | {type} | ~{N}m |
**Full regression total: ~{X} minutes**
## Coverage Map
{Which areas of the feature each TC exercises}
| Area | TCs Covering |
|------|-------------|
| {sub-area} | TC-{MOD}-001, TC-{MOD}-003 |
## Gaps
{Areas of the feature NOT covered by any test case — flagged for future work}
- {uncovered area — reason}
```
**Smoke test subset** picks TCs that cover the highest-risk paths in minimum time. Typically 2-4 TCs per feature. A QA tester should be able to run the smoke suite in under 15 minutes.
**Execution time estimates** are based on step count:
- Simple flow (3-5 steps): ~3 minutes
- Medium flow (6-10 steps): ~5 minutes
- Complex flow (11+ steps): ~8-10 minutes
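A trivial sketch of that banding (the complex-flow value is a midpoint of the ~8-10m range):
```typescript
// Per-TC execution estimate from the step-count bands above.
function estimateMinutes(stepCount: number): number {
  if (stepCount <= 5) return 3;  // simple flow
  if (stepCount <= 10) return 5; // medium flow
  return 9;                      // complex flow (midpoint of ~8-10m)
}
```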
**Coverage map** cross-references TCs against feature sub-areas. Helps QA identify which TCs to re-run when a specific area changes. Sub-areas are derived from Web Layer entry points discovered during analysis.
**Gaps section** flags areas the skill identified but chose not to generate TCs for — either explicitly excluded during checkpoint or insufficient code detail for generation.
**Commit** tracks which code version was analyzed. Compare against current HEAD to detect when regression suite may be stale.
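Staleness detection can be as simple as comparing the recorded hash against HEAD. A sketch, assuming short hashes like the `abc1234` used in the examples:
```typescript
import { execSync } from "node:child_process";

// True when the suite was generated against a different commit than HEAD.
// The mutual prefix check tolerates differing short-hash lengths.
function suiteIsStale(recordedCommit: string): boolean {
  const head = execSync("git rev-parse --short HEAD").toString().trim();
  return !head.startsWith(recordedCommit) && !recordedCommit.startsWith(head);
}
```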


@@ -0,0 +1,65 @@
```markdown
---
id: TC-{MODULE}-{NNN}
title: "{flow description}"
feature: "{feature name}"
priority: high|medium|low
type: functional|regression|smoke|e2e|edge-case
status: draft
tags: ["{tag1}", "{tag2}"]
commit: {commit-hash}
---
# {Title}
## Objective
{What this test verifies — 1-2 sentences describing the user goal and what the test proves}
## Preconditions
- {User role and permissions required}
- {System state required before starting — e.g., "at least one product exists in catalog"}
- {Test data requirements — e.g., "valid credit card in Stripe test mode"}
- {Navigation starting point — e.g., "user is logged into Admin portal"}
## Steps
| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | {user action — Navigate to, Click, Enter, Select, Submit} | {observable outcome — page loads, form appears, message displays} |
| 2 | {next user action} | {what user sees or what changes} |
| 3 | {next user action} | {confirmation, redirect, updated state} |
## Postconditions
{Side effects to verify AFTER the flow completes — sourced from domain events and integration points}
- {e.g., "Order confirmation email sent to customer email address"}
- {e.g., "Inventory quantity decremented for purchased items"}
- {e.g., "Audit log entry created with action 'order.created'"}
## Edge Cases
{Variant scenarios worth separate attention — each could become its own TC if important enough}
- {e.g., "Order with mixed digital and physical products"}
- {e.g., "Payment fails after order created — verify rollback"}
## Notes
- Related TCs: {cross-references to other TCs in this module}
- Dependencies: {external system dependencies — payment gateway, email service}
- Known issues: {documented bugs or limitations affecting this flow}
```
**Frontmatter fields** align with what `test-case-locator` greps for (`id`, `title`, `priority`, `status`, `type`, `tags`). Always populate all fields — the locator agent extracts them for coverage reporting. The `commit` field tracks which code version was analyzed to produce this TC — used for staleness detection on regeneration.
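How those fields come back out is easy to sketch (assuming the locator greps raw frontmatter rather than running a YAML parser):
```typescript
import { readFileSync } from "node:fs";

// Pull the frontmatter fields the locator relies on from one TC file.
function tcFrontmatter(path: string): Record<string, string> {
  const text = readFileSync(path, "utf8");
  const fm = /^---\n([\s\S]*?)\n---/.exec(text)?.[1] ?? "";
  const out: Record<string, string> = {};
  for (const key of ["id", "title", "feature", "priority", "type", "status", "commit"]) {
    const m = new RegExp(`^${key}:\\s*"?(.+?)"?\\s*$`, "m").exec(fm);
    if (m) out[key] = m[1];
  }
  return out;
}

// tcFrontmatter("TC-ORD-001_place-order.md").priority -> "high"
```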
**Steps table rules:**
- Actions use imperative verbs from the user's perspective: Navigate, Click, Enter, Select, Submit, Drag, Upload, Scroll
- Expected results describe what the user OBSERVES — visible UI changes, messages, redirects — not internal state
- Keep each row to one atomic action. Multi-step actions (fill form -> submit) split into separate rows
- Number steps sequentially — branching flows (if X then Y) become separate TCs
**Postconditions sourced from:**
- Domain events (e.g., `OrderCreatedEvent` -> "confirmation email sent")
- Message handlers (e.g., `InventoryReservationHandler` -> "inventory reserved")
- Webhook dispatches (e.g., `FulfillmentWebhookPublisher` -> "fulfillment notified")
- Audit log entries, cache invalidations, CRM syncs
**Priority definitions:**
- **high**: Core happy path, payment/money flows, data integrity, security-critical
- **medium**: Alternative paths, common edge cases, permission boundaries
- **low**: Rare edge cases, cosmetic validation, error message wording