Full Project Scan Instructions
This workflow performs complete project documentation (Steps 1-12).
Called by: document-project/instructions.md router
Handles: initial_scan and full_rescan modes
DATA LOADING STRATEGY - Understanding the Documentation Requirements System. Display this explanation to the user:
How Project Type Detection Works:
This workflow uses a single comprehensive CSV file to intelligently document your project:
documentation-requirements.csv ({documentation_requirements_csv})
- Contains 12 project types (including web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)
- 24-column schema combining project type detection AND documentation requirements
- Detection columns: project_type_id, key_file_patterns (used to identify project type from codebase)
- Requirement columns: requires_api_scan, requires_data_models, requires_ui_components, etc.
- Pattern columns: critical_directories, test_file_patterns, config_patterns, etc.
- Acts as a "scan guide" - tells the workflow WHERE to look and WHAT to document
- Example: For project_type_id="web", key_file_patterns includes "package.json;tsconfig.json;*.config.js" and requires_api_scan=true
When Documentation Requirements are Loaded:
- Fresh Start (initial_scan): Load all 12 rows → detect type using key_file_patterns → use that row's requirements
- Resume: Load ONLY the doc requirements row(s) for cached project_type_id(s)
- Full Rescan: Same as fresh start (may re-detect project type)
- Deep Dive: Load ONLY doc requirements for the part being deep-dived
Now loading documentation requirements data for fresh start...
Load documentation-requirements.csv from: {documentation_requirements_csv}
Store all 12 rows indexed by project_type_id for project detection and requirements lookup
Display: "Loaded documentation requirements for 12 project types (web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)"
Display: "✓ Documentation requirements loaded successfully. Ready to begin project analysis."
Check if {output_folder}/index.md exists. If it does, read the existing index.md to extract metadata (date, project structure, parts count) and store as {{existing_doc_date}}, {{existing_structure}}. Then display:
I found existing documentation generated on {{existing_doc_date}}.
What would you like to do?
1. Re-scan entire project - Update all documentation with latest changes
2. Deep-dive into specific area - Generate detailed documentation for a particular feature/module/folder
3. Cancel - Keep existing documentation as-is
Your choice [1/2/3]:
Set workflow_mode = "full_rescan" Continue to scan level selection below Set workflow_mode = "deep_dive" Set scan_level = "exhaustive" Initialize state file with mode=deep_dive, scan_level=exhaustive Jump to Step 13 Display message: "Keeping existing documentation. Exiting workflow." Exit workflow Set workflow_mode = "initial_scan" Continue to scan level selection belowSelect Scan Level
Choose your scan depth level:
1. Quick Scan (2-5 minutes) [DEFAULT]
- Pattern-based analysis without reading source files
- Scans: Config files, package manifests, directory structure
- Best for: Quick project overview, initial understanding
- File reading: Minimal (configs, README, package.json, etc.)
2. Deep Scan (10-30 minutes)
- Reads files in critical directories based on project type
- Scans: All critical paths from documentation requirements
- Best for: Comprehensive documentation for brownfield PRD
- File reading: Selective (key files in critical directories)
3. Exhaustive Scan (30-120 minutes)
- Reads ALL source files in project
- Scans: Every source file (excludes node_modules, dist, build)
- Best for: Complete analysis, migration planning, detailed audit
- File reading: Complete (all source files)
Your choice [1/2/3] (default: 1):
Set scan_level = "quick" Display: "Using Quick Scan (pattern-based, no source file reading)" Set scan_level = "deep" Display: "Using Deep Scan (reading critical files per project type)" Set scan_level = "exhaustive" Display: "Using Exhaustive Scan (reading all source files)"Initialize state file: {output_folder}/project-scan-report.json Every time you touch the state file, record: step id, human-readable summary (what you actually did), precise timestamp, and any outputs written. Vague phrases are unacceptable. Write initial state: { "workflow_version": "1.2.0", "timestamps": {"started": "{{current_timestamp}}", "last_updated": "{{current_timestamp}}"}, "mode": "{{workflow_mode}}", "scan_level": "{{scan_level}}", "project_root": "{{project_root_path}}", "output_folder": "{{output_folder}}", "completed_steps": [], "current_step": "step_1", "findings": {}, "outputs_generated": ["project-scan-report.json"], "resume_instructions": "Starting from step 1" } Continue with standard workflow from Step 1
Ask user: "What is the root directory of the project to document?" (default: current working directory) Store as {{project_root_path}}Scan {{project_root_path}} for key indicators:
- Directory structure (presence of client/, server/, api/, src/, app/, etc.)
- Key files (package.json, go.mod, requirements.txt, etc.)
- Technology markers matching key_file_patterns from documentation-requirements.csv
Detect if project is:
- Monolith: Single cohesive codebase
- Monorepo: Multiple parts in one repository
- Multi-part: Separate client/server or similar architecture
Display the detected structure and ask: Is this correct? Should I document each part separately? [y/n]
Set repository_type = "monorepo" or "multi-part" For each detected part: - Identify root path - Run project type detection using key_file_patterns from documentation-requirements.csv - Store as part in project_parts array
Ask user to specify correct parts and their paths
Set repository_type = "monolith" Create single part in project_parts array with root_path = {{project_root_path}} Run project type detection using key_file_patterns from documentation-requirements.csvFor each part, match detected technologies and file patterns against key_file_patterns column in documentation-requirements.csv Assign project_type_id to each part Load corresponding documentation_requirements row for each part
I've classified this project: {{project_classification_summary}}
Does this look correct? [y/n/edit]
Outputs: project_structure, project_parts_metadata
IMMEDIATELY update state file with step completion:
- Add to completed_steps: {"step": "step_1", "status": "completed", "timestamp": "{{now}}", "summary": "Classified as {{repository_type}} with {{parts_count}} parts"}
- Update current_step = "step_2"
- Update findings.project_classification with high-level summary only
- CACHE project_type_id(s): Add project_types array: [{"part_id": "{{part_id}}", "project_type_id": "{{project_type_id}}", "display_name": "{{display_name}}"}]
- This cached data prevents reloading all CSV files on resume - we can load just the needed documentation_requirements row(s)
- Update last_updated timestamp
- Write state file
PURGE detailed scan results from memory, keep only summary: "{{repository_type}}, {{parts_count}} parts, {{primary_tech}}"
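The update-then-purge rhythm used throughout this workflow can be pictured with a helper like this (a sketch; `write_state` is the illustrative helper shown earlier, and "purging" simply means carrying only the returned summary string forward):

```python
from datetime import datetime, timezone

def complete_step(state, step_id, summary, next_step, outputs=None):
    """Record a finished step, advance the step pointer, and return only the short summary."""
    state["completed_steps"].append({
        "step": step_id,
        "status": "completed",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "summary": summary,
    })
    state["current_step"] = next_step
    state["outputs_generated"].extend(outputs or [])
    write_state(state["output_folder"] + "/project-scan-report.json", state)
    return summary  # keep only this in context; discard the detailed findings
```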
For each part, scan for existing documentation using patterns:
- README.md, README.rst, README.txt
- CONTRIBUTING.md, CONTRIBUTING.rst
- ARCHITECTURE.md, ARCHITECTURE.txt, docs/architecture/
- DEPLOYMENT.md, DEPLOY.md, docs/deployment/
- API.md, docs/api/
- Any files in docs/, documentation/, .github/ folders
Create inventory of existing_docs with:
- File path
- File type (readme, architecture, api, etc.)
- Which part it belongs to (if multi-part)
I found these existing documentation files: {{existing_docs_list}}
Are there any other important documents or key areas I should focus on while analyzing this project? [Provide paths or guidance, or type 'none']
Store user guidance as {{user_context}}
Outputs: existing_documentation_inventory, user_provided_context
Update state file:
- Add to completed_steps: {"step": "step_2", "status": "completed", "timestamp": "{{now}}", "summary": "Found {{existing_docs_count}} existing docs"}
- Update current_step = "step_3"
- Update last_updated timestamp
PURGE detailed doc contents from memory, keep only: "{{existing_docs_count}} docs found"
For each part in project_parts:
- Load key_file_patterns from documentation_requirements
- Scan part root for these patterns
- Parse technology manifest files (package.json, go.mod, requirements.txt, etc.) - a parsing sketch follows below
- Extract: framework, language, version, database, dependencies
- Build technology_table with columns: Category, Technology, Version, Justification
Determine architecture pattern based on detected tech stack:
- Use project_type_id as primary indicator (e.g., "web" → layered/component-based, "backend" → service/API-centric)
- Consider framework patterns (e.g., React → component hierarchy, Express → middleware pipeline)
- Note architectural style in technology table
- Store as {{architecture_pattern}} for each part
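As an illustration of the manifest-parsing item above, here is a minimal sketch for one manifest type (package.json); the short framework list and the helper name are assumptions for the example, not a complete detection table:

```python
import json
from pathlib import Path

KNOWN_FRAMEWORKS = {"react", "vue", "angular", "express", "next"}  # illustrative subset

def tech_rows_from_package_json(part_root):
    """Build technology_table rows from a part's package.json, if one exists."""
    manifest = Path(part_root) / "package.json"
    if not manifest.exists():
        return []
    data = json.loads(manifest.read_text(encoding="utf-8"))
    deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    return [
        {"Category": "Framework", "Technology": name, "Version": version,
         "Justification": "declared in package.json"}
        for name, version in deps.items() if name in KNOWN_FRAMEWORKS
    ]
```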
Outputs: technology_stack, architecture_patterns
Update state file:
- Add to completed_steps: {"step": "step_3", "status": "completed", "timestamp": "{{now}}", "summary": "Tech stack: {{primary_framework}}"}
- Update current_step = "step_4"
- Update findings.technology_stack with summary per part
- Update last_updated timestamp
PURGE detailed tech analysis from memory, keep only: "{{framework}} on {{language}}"
BATCHING STRATEGY FOR DEEP/EXHAUSTIVE SCANS
This step requires file reading. Apply the batching strategy.
Identify subfolders to process based on:
- scan_level == "deep": Use critical_directories from documentation_requirements
- scan_level == "exhaustive": Get ALL subfolders recursively (excluding node_modules, .git, dist, build, coverage)
For each subfolder to scan:
1. Read all files in the subfolder (consider file size - use judgment for files >5000 LOC)
2. Extract required information based on the conditional flags below
3. IMMEDIATELY write findings to the appropriate output file
4. Validate the written document (section-level validation)
5. Update state file with batch completion
6. PURGE detailed findings from context, keep only a 1-2 sentence summary
7. Move to the next subfolder
Track batches in state file: findings.batches_completed: [ {"path": "{{subfolder_path}}", "files_scanned": {{count}}, "summary": "{{brief_summary}}"} ]
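A sketch of that per-subfolder loop, assuming hypothetical `scan_folder` and `write_findings` callables supplied by the surrounding workflow (the point is the write-then-purge cadence, not the scanning itself):

```python
def scan_in_batches(subfolders, scan_folder, write_findings, state):
    """Process one subfolder at a time so detailed findings never accumulate in context."""
    batches = state["findings"].setdefault("batches_completed", [])
    for folder in subfolders:
        findings = scan_folder(folder)                # read files, extract required info
        write_findings(folder, findings)              # write the output document immediately
        batches.append({
            "path": str(folder),
            "files_scanned": findings["file_count"],
            "summary": findings["summary"],           # 1-2 sentence summary only
        })
        write_state(state["output_folder"] + "/project-scan-report.json", state)
        del findings                                  # purge details before the next batch
```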
For Quick Scan: use pattern matching only - do NOT read source files. Use glob/grep to identify file locations and patterns. Extract information from filenames, directory structure, and config files only.
For each part, check the documentation_requirements boolean flags and execute the corresponding scans:
If requires_api_scan is true:
- Scan for API routes and endpoints using integration_scan_patterns
- Look for: controllers/, routes/, api/, handlers/, endpoints/
- Quick scan: use glob to find route files; extract patterns from filenames and folder structure
- Deep/exhaustive scan: read files in batches (one subfolder at a time); extract HTTP methods, paths, and request/response types from actual code
- Build the API contracts catalog
- IMMEDIATELY write to: {output_folder}/api-contracts-{part_id}.md
- Validate the document has all required sections
- Update state file with output generated
- PURGE detailed API data, keep only: "{{api_count}} endpoints documented"
Output: api_contracts_{part_id}
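For the deep/exhaustive reading pass, a sketch of endpoint extraction assuming Express-style JavaScript route files (the regex and file glob are illustrative; other frameworks need their own patterns):

```python
import re
from pathlib import Path

ROUTE_RE = re.compile(r"\.(get|post|put|patch|delete)\(\s*['\"]([^'\"]+)['\"]")

def extract_express_routes(routes_dir):
    """Collect HTTP method/path pairs from Express-style route files."""
    endpoints = []
    for file in Path(routes_dir).rglob("*.js"):
        text = file.read_text(encoding="utf-8", errors="ignore")
        for method, path in ROUTE_RE.findall(text):
            endpoints.append({"method": method.upper(), "path": path, "source": str(file)})
    return endpoints
```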
If requires_data_models is true:
- Scan for data models using schema_migration_patterns
- Look for: models/, schemas/, entities/, migrations/, prisma/, ORM configs
- Quick scan: identify schema files via glob; parse migration file names for table discovery
- Deep/exhaustive scan: read model files in batches (one subfolder at a time); extract table names, fields, relationships, and constraints from actual code
- Build the database schema documentation
- IMMEDIATELY write to: {output_folder}/data-models-{part_id}.md
- Validate document completeness
- Update state file with output generated
- PURGE detailed schema data, keep only: "{{table_count}} tables documented"
Output: data_models_{part_id}
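For the quick-scan path, table discovery from migration file names might look like this sketch; it assumes a common `..._create_<table>_table` naming convention, which will not hold for every project:

```python
import re
from pathlib import Path

def tables_from_migration_names(migrations_dir):
    """Guess table names from migration filenames like 20240101_create_users_table.sql."""
    tables = set()
    for file in Path(migrations_dir).glob("*"):
        match = re.search(r"create_([a-z0-9_]+?)(?:_table)?$", file.stem)
        if match:
            tables.add(match.group(1))
    return sorted(tables)
```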
If state management documentation is required for this project type:
- Analyze state management patterns
- Look for: Redux, Context API, MobX, Vuex, Pinia, Provider patterns
- Identify: stores, reducers, actions, state structure
Output: state_management_patterns_{part_id}
If requires_ui_components is true:
- Inventory the UI component library
- Scan: components/, ui/, widgets/, views/ folders
- Categorize: Layout, Form, Display, Navigation, etc.
- Identify: design system, component patterns, reusable elements
Output: ui_component_inventory_{part_id}
If this is an embedded/hardware project, look for hardware schematics using hardware_interface_patterns and ask:
This appears to be an embedded/hardware project. Do you have:
- Pinout diagrams
- Hardware schematics
- PCB layouts
- Hardware documentation
If yes, please provide paths or links. [Provide paths or type 'none']
Store hardware docs references.
Output: hardware_documentation_{part_id}
If asset documentation is required:
- Scan and catalog assets using asset_patterns
- Categorize by: Images, Audio, 3D Models, Sprites, Textures, etc.
- Calculate: total size, file counts, formats used
Output: asset_inventory_{part_id}
Scan for additional patterns based on doc requirements:
- config_patterns → Configuration management
- auth_security_patterns → Authentication/authorization approach
- entry_point_patterns → Application entry points and bootstrap
- shared_code_patterns → Shared libraries and utilities
- async_event_patterns → Event-driven architecture
- ci_cd_patterns → CI/CD pipeline details
- localization_patterns → i18n/l10n support
Apply scan_level strategy to each pattern scan (quick=glob only, deep/exhaustive=read files)
Output: comprehensive_analysis_{part_id}
Update state file:
- Add to completed_steps: {"step": "step_4", "status": "completed", "timestamp": "{{now}}", "summary": "Conditional analysis complete, {{files_generated}} files written"}
- Update current_step = "step_5"
- Update last_updated timestamp
- List all outputs_generated
PURGE all detailed scan results from context. Keep only summaries:
- "APIs: {{api_count}} endpoints"
- "Data: {{table_count}} tables"
- "Components: {{component_count}} components"
Annotate the tree with:
- Purpose of each critical directory
- Entry points marked
- Key file locations highlighted
- Integration points noted (for multi-part projects)
Show how parts are organized and where they interface
Create formatted source tree with descriptions:
project-root/
├── client/ # React frontend (Part: client)
│ ├── src/
│ │ ├── components/ # Reusable UI components
│ │ ├── pages/ # Route-based pages
│ │ └── api/ # API client layer → Calls server/
├── server/ # Express API backend (Part: api)
│ ├── src/
│ │ ├── routes/ # REST API endpoints
│ │ ├── models/ # Database models
│ │ └── services/ # Business logic
Outputs: source_tree_analysis, critical_folders_summary
IMMEDIATELY write source-tree-analysis.md to disk. Validate document structure. Update state file:
- Add to completed_steps: {"step": "step_5", "status": "completed", "timestamp": "{{now}}", "summary": "Source tree documented"}
- Update current_step = "step_6"
- Add output: "source-tree-analysis.md" PURGE detailed tree from context, keep only: "Source tree with {{folder_count}} critical folders"
Look for deployment configuration using ci_cd_patterns:
- Dockerfile, docker-compose.yml
- Kubernetes configs (k8s/, helm/)
- CI/CD pipelines (.github/workflows/, .gitlab-ci.yml)
- Deployment scripts
- Infrastructure as Code (terraform/, pulumi/)
Outputs: development_instructions, deployment_configuration, contribution_guidelines
Update state file:
- Add to completed_steps: {"step": "step_6", "status": "completed", "timestamp": "{{now}}", "summary": "Dev/deployment guides written"}
- Update current_step = "step_7"
- Add generated outputs to list
PURGE detailed instructions, keep only: "Dev setup and deployment documented"
Create integration_points array with:
- from: source part
- to: target part
- type: REST API, GraphQL, gRPC, Event Bus, etc.
- details: Endpoints, protocols, data formats
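To make the shape concrete, one integration_points entry might look like the following; the part names and endpoint values are hypothetical, only the field layout comes from the list above:

```python
# Illustrative entry in the integration_points array.
integration_point = {
    "from": "client",                       # source part
    "to": "api",                            # target part
    "type": "REST API",                     # REST API, GraphQL, gRPC, Event Bus, ...
    "details": {
        "endpoints": ["/api/users", "/api/orders"],
        "protocol": "HTTPS",
        "data_format": "JSON",
    },
}
```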
IMMEDIATELY write integration-architecture.md to disk. Validate document completeness.
Output: integration_architecture
Update state file:
- Add to completed_steps: {"step": "step_7", "status": "completed", "timestamp": "{{now}}", "summary": "Integration architecture documented"}
- Update current_step = "step_8" PURGE integration details, keep only: "{{integration_count}} integration points"
Generate an architecture document for each part using its matched architecture template. For each architecture file generated:
- IMMEDIATELY write architecture file to disk
- Validate against architecture template schema
- Update state file with output
- PURGE detailed architecture from context, keep only: "Architecture for {{part_id}} written"
Output: architecture_document
Update state file:
- Add to completed_steps: {"step": "step_8", "status": "completed", "timestamp": "{{now}}", "summary": "Architecture docs written for {{parts_count}} parts"}
- Update current_step = "step_9"
Generate project-overview.md with:
- Full annotated directory tree from Step 5
- Critical folders explained
- Entry points documented
- Multi-part structure (if applicable)
IMMEDIATELY write project-overview.md to disk. Validate document sections.
Generate source-tree-analysis.md (if not already written in Step 5) IMMEDIATELY write to disk and validate
Generate component-inventory.md (or per-part versions) with:
- All discovered components from Step 4
- Categorized by type
- Reusable vs specific components
- Design system elements (if found)
IMMEDIATELY write each component inventory to disk and validate
Generate development-guide.md (or per-part versions) with:
- Prerequisites and dependencies
- Environment setup instructions
- Local development commands
- Build process
- Testing approach and commands
- Common development tasks
IMMEDIATELY write each development guide to disk and validate
Generate project-parts.json metadata file:
```json
{
  "repository_type": "monorepo",
  "parts": [ ... ],
  "integration_points": [ ... ]
}
```
IMMEDIATELY write to disk
Output: supporting_documentation
Update state file:
- Add to completed_steps: {"step": "step_9", "status": "completed", "timestamp": "{{now}}", "summary": "All supporting docs written"}
- Update current_step = "step_10"
- List all newly generated outputs
PURGE all document contents from context, keep only list of files generated
INCOMPLETE DOCUMENTATION MARKER CONVENTION: When a document SHOULD be generated but wasn't (due to quick scan, missing data, conditional requirements not met):
- Use EXACTLY this marker: (To be generated)
- Place it at the end of the markdown link line
- Example: - [API Contracts - Server](./api-contracts-server.md) (To be generated)
- This allows Step 11 to detect and offer to complete these items
- ALWAYS use this exact format for consistency and automated detection
Create index.md with intelligent navigation based on project structure
If single-part, generate a simple index with:
- Project name and type
- Quick reference (tech stack, architecture type)
- Links to all generated docs
- Links to discovered existing docs
- Getting started section
If multi-part, generate a comprehensive index with:
- Project overview and structure summary
- Part-based navigation section
- Quick reference by part
- Cross-part integration links
- Links to all generated and existing docs
- Getting started per part
Include in index.md:
Project Documentation Index
Project Overview
- Type: {{repository_type}} {{#if multi-part}}with {{parts.length}} parts{{/if}}
- Primary Language: {{primary_language}}
- Architecture: {{architecture_type}}
Quick Reference
{{#if single_part}}
- Tech Stack: {{tech_stack_summary}}
- Entry Point: {{entry_point}}
- Architecture Pattern: {{architecture_pattern}}
{{else}}
{{#each parts}}
{{part_name}} ({{part_id}})
- Type: {{project_type}}
- Tech Stack: {{tech_stack}}
- Root: {{root_path}}
{{/each}}
{{/if}}
Generated Documentation
- [Project Overview](./project-overview.md)
- [Architecture](./architecture{{#if multi-part}}-{part_id}{{/if}}.md){{#unless architecture_file_exists}} _(To be generated)_{{/unless}}
- [Source Tree Analysis](./source-tree-analysis.md)
- [Component Inventory](./component-inventory{{#if multi-part}}-{part_id}{{/if}}.md){{#unless component_inventory_exists}} _(To be generated)_{{/unless}}
- [Development Guide](./development-guide{{#if multi-part}}-{part_id}{{/if}}.md){{#unless dev_guide_exists}} _(To be generated)_{{/unless}}
{{#if deployment_found}}- [Deployment Guide](./deployment-guide.md){{#unless deployment_guide_exists}} _(To be generated)_{{/unless}}{{/if}}
{{#if contribution_found}}- Contribution Guide{{/if}}
{{#if api_documented}}- [API Contracts](./api-contracts{{#if multi-part}}-{part_id}{{/if}}.md){{#unless api_contracts_exists}} _(To be generated)_{{/unless}}{{/if}}
{{#if data_models_documented}}- [Data Models](./data-models{{#if multi-part}}-{part_id}{{/if}}.md){{#unless data_models_exists}} _(To be generated)_{{/unless}}{{/if}}
{{#if multi-part}}- [Integration Architecture](./integration-architecture.md){{#unless integration_arch_exists}} _(To be generated)_{{/unless}}{{/if}}
Existing Documentation
{{#each existing_docs}}
- {{title}} - {{description}} {{/each}}
Getting Started
{{getting_started_instructions}}
Before writing index.md, check which expected files actually exist:
- For each document that should have been generated, check if file exists on disk
- Set existence flags: architecture_file_exists, component_inventory_exists, dev_guide_exists, etc.
- These flags determine whether to add the (To be generated) marker
- Track which files are missing in {{missing_docs_list}} for reporting
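A sketch of that existence check; the expected-file map is illustrative and would need the per-part filename suffixes when the project is multi-part:

```python
from pathlib import Path

def check_expected_docs(output_folder, expected_files):
    """Return per-document existence flags plus the list of missing files."""
    flags, missing = {}, []
    for flag_name, filename in expected_files.items():
        exists = (Path(output_folder) / filename).exists()
        flags[flag_name] = exists
        if not exists:
            missing.append(filename)
    return flags, missing

# Example usage (single-part case):
# flags, missing_docs_list = check_expected_docs(output_folder, {
#     "architecture_file_exists": "architecture.md",
#     "component_inventory_exists": "component-inventory.md",
#     "dev_guide_exists": "development-guide.md",
# })
```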
IMMEDIATELY write index.md to disk with appropriate (To be generated) markers for missing files. Validate that the index has all required sections and that all links are valid.
Output: index
Update state file:
- Add to completed_steps: {"step": "step_10", "status": "completed", "timestamp": "{{now}}", "summary": "Master index generated"}
- Update current_step = "step_11"
- Add output: "index.md"
PURGE index content from context
Show summary of all generated files:
Generated in {{output_folder}}/: {{file_list_with_sizes}}
Run validation checklist from {validation}
INCOMPLETE DOCUMENTATION DETECTION:
- PRIMARY SCAN: Look for exact marker: (To be generated)
- FALLBACK SCAN: Look for fuzzy patterns (in case a non-standard marker was used):
- (TBD)
- (TODO)
- (Coming soon)
- (Not yet generated)
- (Pending)
- Extract document metadata from each match for user selection
Read {output_folder}/index.md
Scan for incomplete documentation markers:
Step 1: Search for the exact pattern "(To be generated)" (case-sensitive)
Step 2: For each match found, extract the entire line
Step 3: Parse the line to extract:
- Document title (text within [brackets] or bold)
- File path (from markdown link or inferable from title)
- Document type (infer from filename: architecture, api-contracts, data-models, component-inventory, development-guide, deployment-guide, integration-architecture)
- Part ID if applicable (extract from filename like "architecture-server.md" → part_id: "server")
Step 4: Add to {{incomplete_docs_strict}} array
Fallback fuzzy scan for alternate markers: Search for patterns: (TBD), (TODO), (Coming soon), (Not yet generated), (Pending).
For each fuzzy match:
- Extract same metadata as strict scan
- Add to {{incomplete_docs_fuzzy}} array with fuzzy_match flag
Combine results: Set {{incomplete_docs_list}} = {{incomplete_docs_strict}} + {{incomplete_docs_fuzzy}}
For each item store the structure:
{
  "title": "Architecture – Server",
  "file_path": "./architecture-server.md",
  "doc_type": "architecture",
  "part_id": "server",
  "line_text": "- Architecture – Server _(To be generated)_",
  "fuzzy_match": false
}
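A sketch of the strict-plus-fuzzy scan; it pulls the title and path from markdown links and flags non-standard markers, while the doc_type/part_id inference from the filename is left out for brevity:

```python
import re

STRICT_MARKER = "(To be generated)"
FUZZY_MARKERS = ("(TBD)", "(TODO)", "(Coming soon)", "(Not yet generated)", "(Pending)")
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def find_incomplete_docs(index_text):
    """Scan index.md for incomplete-documentation markers and collect per-line metadata."""
    incomplete = []
    for line in index_text.splitlines():
        strict = STRICT_MARKER in line
        fuzzy = any(marker in line for marker in FUZZY_MARKERS)
        if not (strict or fuzzy):
            continue
        link = LINK_RE.search(line)
        incomplete.append({
            "title": link.group(1) if link else line.strip("-*_ ").split("(")[0].strip(),
            "file_path": link.group(2) if link else None,
            "line_text": line.strip(),
            "fuzzy_match": fuzzy and not strict,
        })
    return incomplete
```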
Documentation generation complete!
Summary:
- Project Type: {{project_type_summary}}
- Parts Documented: {{parts_count}}
- Files Generated: {{files_count}}
- Total Lines: {{total_lines}}
{{#if incomplete_docs_list.length > 0}} ⚠️ Incomplete Documentation Detected:
I found {{incomplete_docs_list.length}} item(s) marked as incomplete:
{{#each incomplete_docs_list}} {{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} for {{part_id}}{{/if}}){{#if fuzzy_match}} ⚠️ [non-standard marker]{{/if}} {{/each}}
{{/if}}
Would you like to:
{{#if incomplete_docs_list.length > 0}}
- Generate incomplete documentation - Complete any of the {{incomplete_docs_list.length}} items above
- Review any specific section [type section name]
- Add more detail to any area [type area name]
- Generate additional custom documentation [describe what]
- Finalize and complete [type 'done']
{{else}}
- Review any specific section [type section name]
- Add more detail to any area [type area name]
- Generate additional documentation [describe what]
- Finalize and complete [type 'done']
{{/if}}
Your choice:
Which incomplete items would you like to generate?
{{#each incomplete_docs_list}}
{{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} - {{part_id}}{{/if}})
{{/each}}
{{incomplete_docs_list.length + 1}}. All of them
Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all':
Parse user selection:
- If "all", set {{selected_items}} = all items in {{incomplete_docs_list}}
- If comma-separated numbers, extract the selected items by index
- Store the result in the {{selected_items}} array
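A sketch of that parsing step (helper name illustrative; out-of-range numbers are silently dropped here, though the workflow could instead re-prompt):

```python
def parse_selection(reply, incomplete_docs):
    """Turn 'all' or '1,3,5' into the list of selected incomplete-doc entries."""
    reply = reply.strip().lower()
    if reply == "all":
        return list(incomplete_docs)
    indices = [int(part) - 1 for part in reply.split(",") if part.strip().isdigit()]
    return [incomplete_docs[i] for i in indices if 0 <= i < len(incomplete_docs)]
```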
Display: "Generating {{selected_items.length}} document(s)..."
For each item in {{selected_items}}:
- Identify the part and requirements:
  - Extract part_id from the item (if present)
  - Look up part data in the project_parts array from the state file
  - Load documentation_requirements for that part's project_type_id
- Route to the appropriate generation substep based on doc_type:
If doc_type == "architecture":
- Display: "Generating architecture documentation for {{part_id}}..."
- Load architecture_match for this part from state file (Step 3 cache)
- Re-run Step 8 architecture generation logic ONLY for this specific part
- Use matched template and fill with cached data from state file
- Write architecture-{{part_id}}.md to disk
- Validate completeness
If doc_type == "api-contracts":
- Display: "Generating API contracts for {{part_id}}..."
- Load part data and documentation_requirements
- Re-run Step 4 API scan substep targeting ONLY this part
- Use scan_level from state file (quick/deep/exhaustive)
- Generate api-contracts-{{part_id}}.md
- Validate document structure
If doc_type == "data-models":
- Display: "Generating data models documentation for {{part_id}}..."
- Re-run Step 4 data models scan substep targeting ONLY this part
- Use schema_migration_patterns from documentation_requirements
- Generate data-models-{{part_id}}.md
- Validate completeness
If doc_type == "component-inventory":
- Display: "Generating component inventory for {{part_id}}..."
- Re-run Step 9 component inventory generation for this specific part
- Scan components/, ui/, widgets/ folders
- Generate component-inventory-{{part_id}}.md
- Validate structure
If doc_type == "development-guide":
- Display: "Generating development guide for {{part_id}}..."
- Re-run Step 9 development guide generation for this specific part
- Use key_file_patterns and test_file_patterns from documentation_requirements
- Generate development-guide-{{part_id}}.md
- Validate completeness
If doc_type == "deployment-guide":
- Display: "Generating deployment guide..."
- Re-run Step 6 deployment configuration scan
- Re-run Step 9 deployment guide generation
- Generate deployment-guide.md
- Validate structure
If doc_type == "integration-architecture":
- Display: "Generating integration architecture..."
- Re-run Step 7 integration analysis for all parts
- Generate integration-architecture.md
- Validate completeness
- Post-generation actions:
  - Confirm the file was written successfully
  - Update state file with the newly generated output
  - Add to {{newly_generated_docs}} tracking list
  - Display: "✓ Generated: {{file_path}}"
- Handle errors:
  - If generation fails, log the error and continue with the next item
  - Track failed items in the {{failed_generations}} list
After all selected items are processed:
Update index.md to remove markers:
- Read current index.md content
- For each item in {{newly_generated_docs}}:
- Find the line containing the file link and marker
- Remove the (To be generated) or fuzzy marker text
- Leave the markdown link intact
- Write updated index.md back to disk
- Update state file to record index.md modification
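A sketch of the marker-removal pass over index.md; it only touches lines that link to the newly generated files and strips the standard and fuzzy markers (plus any emphasis wrapping), leaving the links themselves intact:

```python
MARKERS = ("(To be generated)", "(TBD)", "(TODO)", "(Coming soon)", "(Not yet generated)", "(Pending)")

def remove_incomplete_markers(index_text, generated_paths):
    """Strip incomplete markers from index lines that reference newly generated files."""
    updated = []
    for line in index_text.splitlines():
        if any(path in line for path in generated_paths):
            for marker in MARKERS:
                for wrapped in (f"_{marker}_", f"*{marker}*", marker):
                    line = line.replace(wrapped, "")
            line = line.rstrip()
        updated.append(line)
    return "\n".join(updated)
```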
Display generation summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✓ Documentation Generation Complete!
Successfully Generated: {{#each newly_generated_docs}}
- {{title}} → {{file_path}} {{/each}}
{{#if failed_generations.length > 0}} Failed to Generate: {{#each failed_generations}}
- {{title}} ({{error_message}}) {{/each}} {{/if}}
Updated: index.md (removed incomplete markers)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Update state file with all generation activities
Return to Step 11 menu (loop back to check for any remaining incomplete items)
If the user requested a review, more detail, or additional documentation: make the requested modifications and regenerate the affected files.
If the user typed 'done': proceed to Step 12 completion.
Update state file: - Add to completed_steps: {"step": "step_11_iteration", "status": "completed", "timestamp": "{{now}}", "summary": "Review iteration complete"} - Keep current_step = "step_11" (for loop back) - Update last_updated timestamp Loop back to beginning of Step 11 (re-scan for remaining incomplete docs) Update state file: - Add to completed_steps: {"step": "step_11", "status": "completed", "timestamp": "{{now}}", "summary": "Validation and review complete"} - Update current_step = "step_12" Proceed to Step 12 Create final summary report Compile verification recap variables: - Set {{verification_summary}} to the concrete tests, validations, or scripts you executed (or "none run"). - Set {{open_risks}} to any remaining risks or TODO follow-ups (or "none"). - Set {{next_checks}} to recommended actions before merging/deploying (or "none").Display completion message:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Project Documentation Complete! ✓
Location: {{output_folder}}/
Master Index: {{output_folder}}/index.md 👆 This is your primary entry point for AI-assisted development
Generated Documentation: {{generated_files_list}}
Next Steps:
- Review the index.md to familiarize yourself with the documentation structure
- When creating a brownfield PRD, point the PRD workflow to: {{output_folder}}/index.md
- For UI-only features: Reference {{output_folder}}/architecture-{{ui_part_id}}.md
- For API-only features: Reference {{output_folder}}/architecture-{{api_part_id}}.md
- For full-stack features: Reference both part architectures + integration-architecture.md
Verification Recap:
- Tests/extractions executed: {{verification_summary}}
- Outstanding risks or follow-ups: {{open_risks}}
- Recommended next checks before PR: {{next_checks}}
Brownfield PRD Command: When ready to plan new features, run the PRD workflow and provide this index as input.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
FINALIZE state file:
- Add to completed_steps: {"step": "step_12", "status": "completed", "timestamp": "{{now}}", "summary": "Workflow complete"}
- Update timestamps.completed = "{{now}}"
- Update current_step = "completed"
- Write final state file
Display: "State file saved: {{output_folder}}/project-scan-report.json"