Compare commits


4 Commits

| Author | SHA1 | Message | CI Status | Date |
|--------|------|---------|-----------|------|
| admin | a723cae244 | 690320:2126 UUID agian by Claude Sonnet #02 | Build and Deploy / deploy (push): Failing after 4m27s | 2026-03-20 21:26:23 +07:00 |
| admin | 90cbbb8f11 | 690320:2053 UUID agian #01 | Build and Deploy / deploy (push): Failing after 2m38s | 2026-03-20 20:53:50 +07:00 |
| admin | e3859c8349 | 690320:2026 login #02 | Build and Deploy / deploy (push): Successful in 6m50s | 2026-03-20 20:26:03 +07:00 |
| admin | bac263c097 | 690320:2012 login #01 | Build and Deploy / deploy (push): Successful in 6m43s | 2026-03-20 20:12:03 +07:00 |
1201 changed files with 36285 additions and 170080 deletions
+78
@@ -0,0 +1,78 @@
---
trigger: always_on
---
# Project Specifications & Context Protocol
Description: Enforces strict adherence to the project's documentation structure for all agent activities.
Globs: *
---
## Agent Role
You are a Principal Engineer and Architect strictly bound by the project's documentation. You do not improvise outside of the defined specifications.
## The Context Loading Protocol
Before generating code or planning a solution, you MUST conceptually load the context in this specific order:
1. **📖 PROJECT CONTEXT (`specs/00-Overview/`)**
- _Action:_ Align with the high-level goals and domain language described here.
2. **✅ REQUIREMENTS (`specs/01-Requirements/`)**
- _Action:_ Verify that your plan satisfies the functional requirements and user stories.
- _Constraint:_ If a requirement is ambiguous, stop and ask.
3. **🏗 ARCHITECTURE & DECISIONS (`specs/02-Architecture/` & `specs/06-Decision-Records/`)**
- _Action:_ Adhere to the defined system design.
- _Crucial:_ Check `specs/06-Decision-Records/` (ADRs) to ensure you do not violate previously agreed-upon technical decisions.
4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
- _Action:_
- **Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`** for exact table structures and constraints. (Schema split: `01-drop`, `02-tables`, `03-views-indexes`)
- **Consult `specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql`** to understand initial data states.
- **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`** to understand initial permissions states.
- **Check `specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** for migration context (ADR-017).
- **Check `specs/03-Data-and-Storage/03-05-n8n-migration-setup-guide.md`** for n8n workflow setup.
- _Constraint:_ NEVER invent table names or columns. Use ONLY what is defined here.
5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
- _Action:_ Follow Tech Stack, Naming Conventions, and Code Patterns.
6. **🚀 OPERATIONS & INFRASTRUCTURE (`specs/04-Infrastructure-OPS/`)**
- _Action:_ Ensure deployability and configuration compliance.
- _Constraint:_ Ensure deployment paths, port mappings, and volume mounts are consistent with this documentation.
## Execution Rules
### 1. Citation Requirement
When proposing a change or writing code, you must explicitly reference the source of truth:
> "Implementing feature X per `specs/01-Requirements/` using pattern defined in `specs/05-Engineering-Guidelines/`."
### 2. Conflict Resolution
- **Spec vs. Training Data:** The `specs/` folder ALWAYS supersedes your general training data.
- **Spec vs. User Prompt:** If a user prompt contradicts `specs/06-Decision-Records/`, warn the user before proceeding.
### 3. File Generation
- Do not create new files outside of the established project structure:
- Backend: `backend/src/modules/<name>/`, `backend/src/common/`
- Frontend: `frontend/app/`, `frontend/components/`, `frontend/hooks/`, `frontend/lib/`
- Specs: `specs/` subdirectories only
- Keep the code style consistent with `specs/05-Engineering-Guidelines/`.
- New modules MUST follow the workflow in `.agents/workflows/create-backend-module.md` or `.agents/workflows/create-frontend-page.md`.
### 4. Schema Changes
- **DO NOT** create or run TypeORM migration files.
- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` (or `01-drop`/`03-views-indexes` as appropriate).
- Update `specs/03-Data-and-Storage/03-01-data-dictionary.md` if adding/changing columns.
- Notify the user so they can apply the SQL change to the live database manually.
- **AI Isolation (ADR-018):** Ollama runs on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. All writes go through DMS API.
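A minimal sketch of the ADR-018 write path: the AI layer never opens a database connection or touches the uploads volume; it posts to the DMS API instead. The endpoint path and payload shape here are illustrative assumptions, not the project's actual API contract.

```javascript
// Sketch only (ADR-018): all AI-originated writes go through the DMS HTTP
// API. `/documents/:id/annotations` is a hypothetical endpoint used for
// illustration.
async function saveAiAnnotation(dmsBaseUrl, authToken, documentId, annotation) {
  const res = await fetch(`${dmsBaseUrl}/documents/${documentId}/annotations`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${authToken}`,
    },
    body: JSON.stringify(annotation),
  });
  if (!res.ok) {
    // Surface API rejections instead of silently falling back to direct DB access.
    throw new Error(`DMS API rejected write: ${res.status}`);
  }
  return res.json();
}

module.exports = { saveAiAnnotation };
```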
---
+38
@@ -0,0 +1,38 @@
---
trigger: always_on
description: Control which shell commands the agent may run automatically.
allowAuto:
- 'pnpm test:watch'
- 'pnpm test:debug'
- 'pnpm test:e2e'
- 'git status'
- 'git log --oneline'
- 'git diff'
- 'git branch'
- 'tsc --noEmit'
denyAuto:
- 'rm -rf'
- 'Remove-Item'
- 'git push --force'
- 'git reset --hard'
- 'git clean -fd'
- 'curl | bash'
- 'docker compose down'
- 'DROP TABLE'
- 'TRUNCATE'
- 'DELETE FROM'
alwaysReview: true
scopes:
- 'backend/src/**'
- 'backend/test/**'
- 'frontend/app/**'
---
# Execution Rules
- Only auto-execute commands that are explicitly listed in `allowAuto`.
- Commands in `denyAuto` must always be blocked, even if manually requested.
- All shell operations that create, modify, or delete files in `backend/src/`, `backend/test/`, or `frontend/app/` require human review.
- Alert before running any SQL that modifies data (INSERT/UPDATE/DELETE/DROP/TRUNCATE).
- Alert if environment variables related to DB connection or secrets (DATABASE_URL, JWT_SECRET, passwords) would be displayed or logged.
- Never auto-execute commands that expose sensitive credentials via MCP tools or shell output.
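One way a harness might apply the rules above, assuming `denyAuto` entries match as substrings (so `git push --force origin main` is still blocked) while `allowAuto` entries must match the whole command. Both matching rules are assumptions, and the lists below are abridged from the front matter.

```javascript
// Sketch only: classify a shell command against the allowAuto / denyAuto
// lists. Lists abridged from the rule file's front matter.
const ALLOW_AUTO = [
  'pnpm test:watch', 'pnpm test:debug', 'pnpm test:e2e',
  'git status', 'git log --oneline', 'git diff', 'git branch',
  'tsc --noEmit',
];
const DENY_AUTO = [
  'rm -rf', 'git push --force', 'git reset --hard', 'git clean -fd',
  'curl | bash', 'docker compose down', 'DROP TABLE', 'TRUNCATE', 'DELETE FROM',
];

function classifyCommand(cmd) {
  // Deny list wins even over manual requests, per the rules above.
  if (DENY_AUTO.some((bad) => cmd.includes(bad))) return 'blocked';
  // Only exact allow-list matches may run without review.
  if (ALLOW_AUTO.includes(cmd.trim())) return 'auto';
  // alwaysReview: everything else requires a human.
  return 'review';
}

module.exports = { classifyCommand };
```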
+30 -182
@@ -2,7 +2,7 @@
 > **The Event Horizon of Software Quality.**
 > _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
-> _Version: 1.8.6 — LCBP3-DMS Edition (v1.8.6 Production Ready)_
+> _Version: 1.2.0 — LCBP3-DMS Edition (v1.8.1 UAT Ready)_
 ---
@@ -55,7 +55,7 @@ Some skills and scripts reference a `.specify/` directory for templates and proj
 The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.
 ```text
-.agents/ # Agent Skills & Rules
+.agents/
 ├── skills/ # @ Mentions (Agent Intelligence)
 │ ├── nestjs-best-practices/ # NestJS Architecture Patterns
 │ ├── next-best-practices/ # Next.js App Router Patterns
@@ -78,37 +78,32 @@ The toolkit is organized into modular components that provide both the logic (Sc
 │ ├── speckit-tester/ # Test Runner & Coverage
 │ └── speckit-validate/ # Implementation Validator
-├── rules/ # Project Context & Validation Rules
-│ ├── 00-project-context.md # Role, Persona, Rule Tiers
-│ ├── 01-adr-019-uuid.md # UUID Strategy (Critical)
-│ ├── 02-security.md # Security Requirements
-│ ├── 03-typescript.md # TypeScript Standards
-│ ├── 04-domain-terminology.md # DMS Glossary Compliance
-│ ├── 05-forbidden-actions.md # Critical Prohibited Patterns
-│ ├── 06-backend-patterns.md # NestJS Architecture Rules
-│ ├── 07-frontend-patterns.md # Next.js App Router Rules
-│ ├── 08-development-flow.md # Development Workflow
-│ ├── 09-commit-checklist.md # Pre-commit Validation
-│ ├── 10-error-handling.md # ADR-007 Compliance
-│ └── 11-ai-integration.md # ADR-018/020 AI Boundaries
+├── workflows/ # / Slash Commands (Orchestration)
+│ ├── 00-speckit-all.md # Full Pipeline (10 steps: Specify → Validate)
+│ ├── 0111-speckit-*.md # Individual phase workflows
+│ ├── speckit-prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
+│ ├── schema-change.md # DB Schema Change (ADR-009)
+│ ├── create-backend-module.md # NestJS Module Scaffolding
+│ ├── create-frontend-page.md # Next.js Page Scaffolding
+│ ├── deploy.md # Deployment via Gitea CI/CD
+│ └── util-speckit-*.md # Utilities (checklist, diff, migrate, etc.)
 └── scripts/
 ├── bash/ # Bash Core (Kinetic logic)
-│ ├── common.sh # Shared utilities & path resolution
-│ ├── check-prerequisites.sh # Prerequisite validation
-│ ├── create-new-feature.sh # Feature branch creation
-│ ├── setup-plan.sh # Plan template setup
-│ ├── update-agent-context.sh # Agent file updater (main)
-│ ├── plan-parser.sh # Plan data extraction (module)
-│ ├── content-generator.sh # Language-specific templates (module)
-│ └── agent-registry.sh # 17-agent type registry (module)
 ├── powershell/ # PowerShell Equivalents (Windows-native)
-│ ├── common.ps1 # Shared utilities & prerequisites
-│ └── create-new-feature.ps1 # Feature branch creation
 ├── fix_links.py # Spec link fixer
 ├── verify_links.py # Spec link verifier
 └── start-mcp.js # MCP server launcher
-.windsurf/workflows/ # / Slash Commands (Orchestration)
-├── 00-speckit.all.md # Full Pipeline (10 steps: Specify → Validate)
-├── 0111-speckit-*.md # Individual phase workflows
-├── speckit-prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
-├── schema-change.md # DB Schema Change (ADR-009)
-├── create-backend-module.md # NestJS Module Scaffolding
-├── create-frontend-page.md # Next.js Page Scaffolding
-├── deploy.md # Deployment via Gitea CI/CD
-├── review.md # Code Review Workflow
-└── util-speckit-*.md # Utilities (checklist, diff, migrate, etc.)
 ```
 ---
@@ -259,24 +254,24 @@ If you change your mind mid-project:
 ---
-## 🏗️ LCBP3-DMS Project Notes (v1.8.6)
-### 📊 Current Status: Production Ready (2026-04-14)
+## 🏗️ LCBP3-DMS Project Notes (v1.8.1)
+### 📊 Current Status: UAT Ready (2026-03-11)
 | Area | Status |
-| ------------- | ------------------------------- |
+|------|--------|
 | Backend | ✅ 18 Modules, Production Ready |
 | Frontend | ✅ 100% Complete |
-| Database | ✅ Schema v1.8.6 Stable |
+| Database | ✅ Schema v1.8.0 Stable |
 | Documentation | ✅ **10/10 Gaps Closed** |
-| AI Migration | ✅ Ollama Integration Complete |
-| UAT | ✅ Completed Successfully |
-| Deployment | Production Deployed |
+| AI Migration | 🔄 Pre-migration Setup (n8n + Ollama) |
+| UAT | 🔄 In Progress |
+| Deployment | 📋 Pending Go-Live |
 ### 📁 Key Spec Files (Always Check Before Writing Code)
 | Document | Path | When to Use |
-| --------------- | ---------------------------------------------------------------- | ------------------- |
+|--------|------|--------|
 | Schema Tables | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` | Before writing queries |
 | Data Dictionary | `specs/03-Data-and-Storage/03-01-data-dictionary.md` | Checking business rules |
 | Edge Cases | `specs/01-Requirements/01-06-edge-cases-and-rules.md` | 37 Rules |
@@ -287,7 +282,7 @@ If you change your mind mid-project:
 ### ⚡ Project-Specific Workflow Cheatsheet
 | Task | Workflow / Command | Notes |
-| --------------------- | ------------------------- | --------------------------------- |
+|------|--------------------|-------|
 | Create Backend Module | `/create-backend-module` | Scaffolds NestJS module |
 | Create Frontend Page | `/create-frontend-page` | Next.js App Router page |
 | Schema Change | `/schema-change` | ADR-009: No migrations |
@@ -305,151 +300,4 @@ If you change your mind mid-project:
---
## 🔧 Troubleshooting
### Common Issues & Solutions
#### **Version Inconsistency Errors**
**Problem**: Scripts report version mismatches between files.
**Solution**:
```bash
# Run version validation
./scripts/bash/validate-versions.sh
# Fix by updating all files to v1.8.6
# Then re-run validation to confirm
```
**Files to check**:
- `.agents/README.md`
- `.agents/skills/VERSION`
- `.agents/rules/00-project-context.md`
- `.agents/skills/skills.md`
#### **Missing Workflow Files**
**Problem**: Workflows not found in `.windsurf/workflows/`.
**Solution**:
```bash
# Sync workflow check
./scripts/bash/sync-workflows.sh
# Verify all 23 expected workflows are present
# Create missing ones from templates if needed
```
#### **Skill Health Issues**
**Problem**: Skills missing SKILL.md or required sections.
**Solution**:
```bash
# Run comprehensive skill audit
./scripts/bash/audit-skills.sh
# Check specific skill issues
# Missing files will be listed with specific errors
```
**Required SKILL.md sections**:
- Front matter: `name`, `description`, `version`
- Content: `## Role`, `## Task`
#### **Script Permission Issues**
**Problem**: Bash scripts not executable.
**Solution**:
```bash
# Make scripts executable
chmod +x .agents/scripts/bash/*.sh
# Verify with
ls -la .agents/scripts/bash/
```
#### **PowerShell Execution Policy**
**Problem**: PowerShell scripts blocked by execution policy.
**Solution**:
```powershell
# Check current policy
Get-ExecutionPolicy
# Allow scripts for current user
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Or run bypass for single script
PowerShell -ExecutionPolicy Bypass -File .agents/scripts/powershell/audit-skills.ps1
```
### Debug Mode
**Enable verbose output**:
```bash
# Run scripts with debug info
bash -x .agents/scripts/bash/audit-skills.sh
# PowerShell with verbose output
$VerbosePreference = "Continue"
. .agents/scripts/powershell/audit-skills.ps1
```
### Health Check Commands
**Quick health assessment**:
```bash
# 1. Check versions
./scripts/bash/validate-versions.sh
# 2. Audit skills
./scripts/bash/audit-skills.sh
# 3. Sync workflows
./scripts/bash/sync-workflows.sh
# 4. Check directory structure
find .agents -type f -name "*.md" | wc -l
find .windsurf/workflows -name "*.md" | wc -l
```
**PowerShell equivalent**:
```powershell
# 1. Check versions
. .agents/scripts/powershell/validate-versions.ps1
# 2. Audit skills
. .agents/scripts/powershell/audit-skills.ps1
# 3. Count files
(Get-ChildItem -Path .agents -Recurse -Filter "*.md").Count
(Get-ChildItem -Path .windsurf/workflows -Filter "*.md").Count
```
### Getting Help
**If issues persist**:
1. Check LCBP3 project version alignment
2. Verify `.specify/` directory structure (if using templates)
3. Ensure all dependencies are installed (bash, powershell core)
4. Review the specific error messages in script output
5. Check this README for workflow path updates (`.windsurf/workflows`)
---
_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._
-571
@@ -1,571 +0,0 @@
#!/usr/bin/env node
/**
* advanced-validator.js - Advanced validation capabilities for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Advanced validation class
class AdvancedValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
validations: {},
summary: {
total_validations: 0,
passed_validations: 0,
failed_validations: 0,
warnings: 0,
critical_issues: 0
}
};
this.criticalIssues = [];
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
validateSkillFrontMatter(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'SKILL.md file not found',
path: skillMdPath
});
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (!frontMatterMatch) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'No front matter found',
path: skillMdPath
});
return false;
}
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
const requiredFields = ['name', 'description', 'version'];
const missingFields = requiredFields.filter(field => !frontMatter[field]);
if (missingFields.length > 0) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Missing required fields: ${missingFields.join(', ')}`,
missing_fields: missingFields,
front_matter: frontMatter,
path: skillMdPath
});
return false;
}
// Validate version format
const versionPattern = /^\d+\.\d+\.\d+$/;
if (!versionPattern.test(frontMatter.version)) {
this.addValidationResult(`skill_${skillName}_version_format`, 'warn', {
message: 'Version format should be X.Y.Z',
version: frontMatter.version,
path: skillMdPath
});
}
// Validate dependencies if present
if (frontMatter['depends-on']) {
const dependencies = Array.isArray(frontMatter['depends-on'])
? frontMatter['depends-on']
: [frontMatter['depends-on']];
for (const dep of dependencies) {
const depPath = path.join(SKILLS_DIR, dep);
if (!fs.existsSync(depPath)) {
this.addValidationResult(`skill_${skillName}_dependency_${dep}`, 'critical', {
message: `Dependency not found: ${dep}`,
dependency: dep,
path: skillMdPath
});
}
}
}
this.addValidationResult(`skill_${skillName}_frontmatter`, 'pass', {
message: 'Front matter is valid',
front_matter: frontMatter,
path: skillMdPath
});
return true;
} catch (yamlError) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Invalid YAML in front matter: ${yamlError.message}`,
path: skillMdPath
});
return false;
}
} catch (error) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Error reading SKILL.md: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateSkillContent(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Check for required sections
const requiredSections = ['## Role', '## Task'];
const missingSections = requiredSections.filter(section => !content.includes(section));
if (missingSections.length > 0) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Missing required sections: ${missingSections.join(', ')}`,
missing_sections: missingSections,
path: skillMdPath
});
return false;
}
// Check for forbidden patterns
const forbiddenPatterns = [
{ pattern: /TODO.*FIX/gi, message: 'TODO items should be resolved' },
{ pattern: /FIXME/gi, message: 'FIXME items should be addressed' },
{ pattern: /XXX/gi, message: 'XXX markers should be replaced' }
];
for (const { pattern, message } of forbiddenPatterns) {
if (pattern.test(content)) {
this.addValidationResult(`skill_${skillName}_forbidden_patterns`, 'warn', {
message: `${message} found in content`,
pattern: pattern.toString(),
path: skillMdPath
});
}
}
// Validate content length
const contentLength = content.length;
if (contentLength < 500) {
this.addValidationResult(`skill_${skillName}_content_length`, 'warn', {
message: 'Skill content seems too short',
length: contentLength,
path: skillMdPath
});
}
this.addValidationResult(`skill_${skillName}_content`, 'pass', {
message: 'Skill content is valid',
length: contentLength,
path: skillMdPath
});
return true;
} catch (error) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Error validating content: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateWorkflowStructure(workflowPath, workflowName) {
if (!fs.existsSync(workflowPath)) {
this.addValidationResult(`workflow_${workflowName}_exists`, 'fail', {
message: 'Workflow file not found',
path: workflowPath
});
return false;
}
try {
const content = fs.readFileSync(workflowPath, 'utf8');
// Check for markdown headers
if (!content.includes('#')) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: 'No markdown headers found',
path: workflowPath
});
return false;
}
// Check for workflow-specific patterns
const hasWorkflowContent = content.length > 200;
if (!hasWorkflowContent) {
this.addValidationResult(`workflow_${workflowName}_content`, 'warn', {
message: 'Workflow content seems too short',
length: content.length,
path: workflowPath
});
}
// Validate skill references
const skillReferences = content.match(/@speckit-\w+/g) || [];
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
const skillPath = path.join(SKILLS_DIR, skillName);
if (!fs.existsSync(skillPath)) {
this.addValidationResult(`workflow_${workflowName}_skill_ref_${skillName}`, 'critical', {
message: `Workflow references non-existent skill: ${skillRef}`,
skill_reference: skillRef,
path: workflowPath
});
}
}
this.addValidationResult(`workflow_${workflowName}_structure`, 'pass', {
message: 'Workflow structure is valid',
skill_references: skillReferences,
path: workflowPath
});
return true;
} catch (error) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: `Error validating workflow: ${error.message}`,
path: workflowPath
});
return false;
}
}
validateCrossReferences() {
this.log('Validating cross-references...', 'info');
// Check README.md references
const readmePath = path.join(AGENTS_DIR, 'README.md');
if (fs.existsSync(readmePath)) {
const readmeContent = fs.readFileSync(readmePath, 'utf8');
// Check if README references correct workflow path
if (readmeContent.includes('.agents/workflows') && !readmeContent.includes('.windsurf/workflows')) {
this.addValidationResult('readme_workflow_reference', 'critical', {
message: 'README.md references .agents/workflows instead of .windsurf/workflows',
path: readmePath
});
}
// Check version consistency in README
const versionMatches = readmeContent.match(/v?(\d+\.\d+\.\d+)/g) || [];
const uniqueVersions = [...new Set(versionMatches)];
if (uniqueVersions.length > 1) {
this.addValidationResult('readme_version_consistency', 'warn', {
message: 'Multiple versions found in README.md',
versions: uniqueVersions,
path: readmePath
});
}
}
// Check skills.md references
const skillsMdPath = path.join(SKILLS_DIR, 'skills.md');
if (fs.existsSync(skillsMdPath)) {
const skillsContent = fs.readFileSync(skillsMdPath, 'utf8');
// Validate skill dependency matrix
if (skillsContent.includes('## Skill Dependency Matrix')) {
this.addValidationResult('skills_dependency_matrix', 'pass', {
message: 'Skills documentation includes dependency matrix',
path: skillsMdPath
});
} else {
this.addValidationResult('skills_dependency_matrix', 'warn', {
message: 'Skills documentation missing dependency matrix',
path: skillsMdPath
});
}
}
}
validateSecurityCompliance() {
this.log('Validating security compliance...', 'info');
// Check for security patterns in rules
const securityRulePath = path.join(AGENTS_DIR, 'rules', '02-security.md');
if (fs.existsSync(securityRulePath)) {
const securityContent = fs.readFileSync(securityRulePath, 'utf8');
const requiredSecurityTopics = [
'authentication',
'authorization',
'rbac',
'validation',
'audit'
];
const missingTopics = requiredSecurityTopics.filter(topic =>
!securityContent.toLowerCase().includes(topic.toLowerCase())
);
if (missingTopics.length > 0) {
this.addValidationResult('security_rules_completeness', 'warn', {
message: `Security rules missing topics: ${missingTopics.join(', ')}`,
missing_topics: missingTopics,
path: securityRulePath
});
} else {
this.addValidationResult('security_rules_completeness', 'pass', {
message: 'Security rules cover all required topics',
path: securityRulePath
});
}
}
// Check for ADR-019 compliance in rules
const uuidRulePath = path.join(AGENTS_DIR, 'rules', '01-adr-019-uuid.md');
if (fs.existsSync(uuidRulePath)) {
const uuidContent = fs.readFileSync(uuidRulePath, 'utf8');
const criticalUuidRules = [
'parseInt',
'Number(',
'publicId',
'@Exclude()'
];
const missingRules = criticalUuidRules.filter(rule =>
!uuidContent.includes(rule)
);
if (missingRules.length > 0) {
this.addValidationResult('uuid_rules_completeness', 'critical', {
message: `UUID rules missing critical patterns: ${missingRules.join(', ')}`,
missing_patterns: missingRules,
path: uuidRulePath
});
} else {
this.addValidationResult('uuid_rules_completeness', 'pass', {
message: 'UUID rules cover all critical patterns',
path: uuidRulePath
});
}
}
}
validatePerformanceMetrics() {
this.log('Validating performance metrics...', 'info');
// Check file sizes
const criticalFiles = [
{ path: path.join(AGENTS_DIR, 'README.md'), name: 'README.md' },
{ path: path.join(SKILLS_DIR, 'skills.md'), name: 'skills.md' },
{ path: path.join(AGENTS_DIR, 'skills', 'VERSION'), name: 'VERSION' }
];
for (const file of criticalFiles) {
if (fs.existsSync(file.path)) {
const stats = fs.statSync(file.path);
const sizeKB = stats.size / 1024;
if (sizeKB > 100) {
this.addValidationResult(`file_size_${file.name}`, 'warn', {
message: `File ${file.name} is large (${sizeKB.toFixed(1)}KB)`,
size_kb: sizeKB,
path: file.path
});
} else {
this.addValidationResult(`file_size_${file.name}`, 'pass', {
message: `File ${file.name} size is acceptable`,
size_kb: sizeKB,
path: file.path
});
}
}
}
// Check directory structure depth
function getDirectoryDepth(dirPath, currentDepth = 0) {
let maxDepth = currentDepth;
if (fs.existsSync(dirPath)) {
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
if (fs.statSync(itemPath).isDirectory()) {
const depth = getDirectoryDepth(itemPath, currentDepth + 1);
maxDepth = Math.max(maxDepth, depth);
}
}
}
return maxDepth;
}
const agentsDepth = getDirectoryDepth(AGENTS_DIR);
if (agentsDepth > 5) {
this.addValidationResult('directory_depth', 'warn', {
message: `.agents directory structure is deep (${agentsDepth} levels)`,
depth: agentsDepth,
path: AGENTS_DIR
});
} else {
this.addValidationResult('directory_depth', 'pass', {
message: `.agents directory structure depth is acceptable`,
depth: agentsDepth,
path: AGENTS_DIR
});
}
}
addValidationResult(name, status, details) {
this.validationResults.validations[name] = {
status,
timestamp: new Date().toISOString(),
...details
};
this.validationResults.summary.total_validations++;
switch (status) {
case 'pass':
this.validationResults.summary.passed_validations++;
this.log(`${name}: PASS - ${details.message}`, 'pass');
break;
case 'fail':
this.validationResults.summary.failed_validations++;
this.log(`${name}: FAIL - ${details.message}`, 'fail');
break;
case 'warn':
this.validationResults.summary.warnings++;
this.log(`${name}: WARN - ${details.message}`, 'warn');
break;
case 'critical':
this.validationResults.summary.critical_issues++;
this.criticalIssues.push({ name, ...details });
this.log(`${name}: CRITICAL - ${details.message}`, 'critical');
break;
}
}
async runAdvancedValidation() {
this.log('Starting advanced validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Validate all skills
this.log('Validating skills...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.validateSkillFrontMatter(skillPath, skillDir);
this.validateSkillContent(skillPath, skillDir);
}
}
// Validate all workflows
this.log('Validating workflows...', 'info');
if (fs.existsSync(WORKFLOWS_DIR)) {
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.validateWorkflowStructure(workflowPath, workflowName);
}
}
// Cross-reference validation
this.validateCrossReferences();
// Security compliance validation
this.validateSecurityCompliance();
// Performance metrics validation
this.validatePerformanceMetrics();
// Generate summary
this.generateSummary();
return this.validationResults;
}
generateSummary() {
// Bug fix: critical issues are tracked on the validator instance
// (this.criticalIssues), not inside validationResults, so destructuring
// `critical_issues` from validationResults was always undefined.
const { summary } = this.validationResults;
const critical_issues = this.criticalIssues;
this.log('=== Advanced Validation Summary ===', 'info');
this.log(`Total validations: ${summary.total_validations}`, 'info');
this.log(`Passed: ${summary.passed_validations}`, 'pass');
this.log(`Failed: ${summary.failed_validations}`, summary.failed_validations > 0 ? 'fail' : 'info');
this.log(`Warnings: ${summary.warnings}`, 'warn');
this.log(`Critical issues: ${summary.critical_issues}`, 'critical');
if (critical_issues.length > 0) {
this.log('Critical Issues:', 'critical');
critical_issues.forEach(issue => {
this.log(` - ${issue.name}: ${issue.message}`, 'critical');
});
}
// Save validation results
const validationReportPath = path.join(AGENTS_DIR, 'reports', 'advanced-validation.json');
const reportsDir = path.dirname(validationReportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(validationReportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Advanced validation report saved to: ${validationReportPath}`, 'info');
}
}
// CLI interface
async function main() {
const validator = new AdvancedValidator();
try {
const results = await validator.runAdvancedValidation();
process.exit(results.summary.critical_issues > 0 ? 1 : 0);
} catch (error) {
console.error('Advanced validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { AdvancedValidator };
// Run if called directly
if (require.main === module) {
main();
}
-195
@@ -1,195 +0,0 @@
#!/bin/bash
# audit-skills.sh - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
set -uo pipefail
# Note: no -e — we let per-skill checks accumulate issues without terminating
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
SKILLS_DIR="$AGENTS_DIR/skills"
echo "=== Skills Health Audit ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if skill has required files
check_skill_health() {
local skill_dir="$1"
local skill_name="$(basename "$skill_dir")"
local issues=0
# Check for SKILL.md
if [[ -f "$skill_dir/SKILL.md" ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/SKILL.md"
else
echo -e "${RED} MISSING${NC}: $skill_name/SKILL.md"
((issues++))
fi
# Check for templates directory (optional)
if [[ -d "$skill_dir/templates" ]]; then
local template_count
template_count=$(find "$skill_dir/templates" -name "*.md" -type f | wc -l)
if [[ $template_count -gt 0 ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/templates ($template_count files)"
else
echo -e "${YELLOW} EMPTY${NC}: $skill_name/templates (no files)"
fi
fi
# Check SKILL.md content if exists
local skill_file="$skill_dir/SKILL.md"
if [[ -f "$skill_file" ]]; then
# Check for required front matter fields
local required_fields=("name" "description" "version")
for field in "${required_fields[@]}"; do
if grep -q "^$field:" "$skill_file"; then
echo -e " ${GREEN} FIELD${NC}: $field"
else
echo -e " ${RED} MISSING FIELD${NC}: $field"
((issues++)) || true
fi
done
# Check for LCBP3 context reference (speckit-* skills only)
if [[ "$skill_name" == speckit-* ]]; then
if grep -q '_LCBP3-CONTEXT\.md' "$skill_file"; then
echo -e " ${GREEN} CONTEXT${NC}: LCBP3 appendix referenced"
else
echo -e " ${YELLOW} MISSING${NC}: LCBP3 context reference"
((issues++)) || true
fi
fi
fi
return $issues
}
# Function to get skill version from SKILL.md
get_skill_version() {
local skill_file="$1"
if [[ -f "$skill_file" ]]; then
# Match 'version: X.Y.Z' (or quoted) at a LINE START only; ignore nested ` version:` fields.
# Output: bare X.Y.Z with no quotes/whitespace.
local raw
raw=$(grep -E "^version:[[:space:]]*['\"]?[0-9]+\.[0-9]+\.[0-9]+" "$skill_file" | head -1 || true)
if [[ -n "$raw" ]]; then
printf '%s' "$raw" | sed -E "s/^version:[[:space:]]*['\"]?([0-9]+\.[0-9]+\.[0-9]+).*/\1/"
else
echo "unknown"
fi
else
echo "no_file"
fi
}
# Check skills directory
if [[ ! -d "$SKILLS_DIR" ]]; then
echo -e "${RED}ERROR: Skills directory not found${NC}"
exit 1
fi
echo "Scanning skills directory: $SKILLS_DIR"
echo
# Get all skill directories
SKILL_DIRS=()
while IFS= read -r -d '' dir; do
SKILL_DIRS+=("$dir")
done < <(find "$SKILLS_DIR" -maxdepth 1 -type d -not -path "$SKILLS_DIR" -print0 | sort -z)
echo "Found ${#SKILL_DIRS[@]} skill directories"
echo
# Audit each skill
TOTAL_ISSUES=0
SKILL_SUMMARY=()
for skill_dir in "${SKILL_DIRS[@]}"; do
skill_name="$(basename "$skill_dir")"
# Skip non-skill entries (e.g. _LCBP3-CONTEXT.md would not match here; safe)
[[ "$skill_name" == _* ]] && continue
echo "Auditing: $skill_name"
echo "------------------------"
# errexit is off by design (see header), so the status can be captured directly
check_skill_health "$skill_dir"
issues=$?
skill_version=$(get_skill_version "$skill_dir/SKILL.md")
SKILL_SUMMARY+=("$skill_name:$issues:$skill_version")
TOTAL_ISSUES=$((TOTAL_ISSUES + issues))
echo
done
# Summary report
echo "=== Skills Audit Summary ==="
echo
echo "Skill Status:"
echo "-----------"
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ $issues -eq 0 ]]; then
echo -e "${GREEN} HEALTHY${NC}: $name (v$version)"
else
echo -e "${RED} ISSUES${NC}: $name (v$version) - $issues issues"
fi
done
echo
# Check skills.md version consistency
SKILLS_VERSION_FILE="$SKILLS_DIR/VERSION"
if [[ -f "$SKILLS_VERSION_FILE" ]]; then
global_version=$(grep "^version:" "$SKILLS_VERSION_FILE" | sed 's/version: *//' | tr -d '\r\n ')
echo "Global skills version: v$global_version"
echo
# Check for version mismatches
echo "Version Consistency Check:"
echo "------------------------"
VERSION_MISMATCHES=0
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ "$version" != "unknown" && "$version" != "no_file" && "$version" != "$global_version" ]]; then
echo -e "${YELLOW} MISMATCH${NC}: $name is v$version, global is v$global_version"
((VERSION_MISMATCHES++))
fi
done
if [[ $VERSION_MISMATCHES -eq 0 ]]; then
echo -e "${GREEN} All skills match global version${NC}"
fi
fi
echo
# Overall health
if [[ $TOTAL_ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All skills healthy ===${NC}"
echo "Total skills: ${#SKILL_DIRS[@]}"
exit 0
else
echo -e "${RED}=== ISSUES FOUND: $TOTAL_ISSUES total issues ===${NC}"
echo
echo "Recommendations:"
echo "1. Fix missing SKILL.md files"
echo "2. Add required front matter fields"
echo "3. Ensure Role and Task sections exist"
echo "4. Align skill versions with global version"
exit 1
fi
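audit-skills.sh deliberately omits `-e` so per-skill checks can accumulate issues. A minimal sketch of the pitfall it avoids: the `(( ))` arithmetic command returns status 1 when its result is 0, so a post-increment of a zero counter kills an errexit shell unless guarded.

```shell
# Under errexit, ((i++)) with i=0 evaluates to 0, returns status 1, and exits the shell.
bash -c 'set -e; i=0; ((i++)); echo "reached"' || echo "died"
# Guarding the increment (or using i=$((i + 1))) keeps errexit scripts alive.
bash -c 'set -e; i=0; ((i++)) || true; echo "i=$i"'
```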
@@ -1,149 +0,0 @@
#!/bin/bash
# sync-workflows.sh - Sync workflow references between .agents and .windsurf
# Part of LCBP3-DMS Phase 2 improvements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
WINDSURF_DIR="$BASE_DIR/.windsurf"
WORKFLOWS_DIR="$WINDSURF_DIR/workflows"
echo "=== Workflow Synchronization Check ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if workflow exists
check_workflow() {
local workflow_name="$1"
local workflow_file="$WORKFLOWS_DIR/$workflow_name"
if [[ -f "$workflow_file" ]]; then
echo -e "${GREEN} EXISTS${NC}: $workflow_name"
return 0
else
echo -e "${RED} MISSING${NC}: $workflow_name"
return 1
fi
}
# Function to list all workflows
list_workflows() {
if [[ -d "$WORKFLOWS_DIR" ]]; then
find "$WORKFLOWS_DIR" -name "*.md" -type f | sort
else
echo "No workflows directory found"
fi
}
# Check directories
echo "Checking directory structure..."
if [[ -d "$AGENTS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .agents directory exists"
else
echo -e "${RED} ERROR${NC}: .agents directory not found"
exit 1
fi
if [[ -d "$WINDSURF_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .windsurf directory exists"
else
echo -e "${RED} ERROR${NC}: .windsurf directory not found"
exit 1
fi
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: workflows directory exists"
else
echo -e "${RED} ERROR${NC}: workflows directory not found"
exit 1
fi
echo
# Expected workflows based on README documentation
echo "Checking expected workflows..."
EXPECTED_WORKFLOWS=(
"00-speckit.all.md"
"01-speckit.constitution.md"
"02-speckit.specify.md"
"03-speckit.clarify.md"
"04-speckit.plan.md"
"05-speckit.tasks.md"
"06-speckit.analyze.md"
"07-speckit.implement.md"
"08-speckit.checker.md"
"09-speckit.tester.md"
"10-speckit.reviewer.md"
"11-speckit.validate.md"
"speckit.prepare.md"
"schema-change.md"
"create-backend-module.md"
"create-frontend-page.md"
"deploy.md"
"review.md"
"util-speckit.checklist.md"
"util-speckit.diff.md"
"util-speckit.migrate.md"
"util-speckit.quizme.md"
"util-speckit.status.md"
"util-speckit.taskstoissues.md"
)
MISSING_WORKFLOWS=0
for workflow in "${EXPECTED_WORKFLOWS[@]}"; do
if ! check_workflow "$workflow"; then
MISSING_WORKFLOWS=$((MISSING_WORKFLOWS + 1)) # plain ((var++)) returns status 1 when var is 0 and would abort under set -e
fi
done
echo
# List all actual workflows
echo "All workflows in $WORKFLOWS_DIR:"
echo "--------------------------------"
while IFS= read -r workflow; do
echo " $(basename "$workflow")"
done < <(list_workflows)
echo
# Check for orphaned workflows (unexpected ones)
echo "Checking for unexpected workflows..."
ACTUAL_WORKFLOWS=()
while IFS= read -r workflow; do
ACTUAL_WORKFLOWS+=("$(basename "$workflow")")
done < <(list_workflows)
for actual_workflow in "${ACTUAL_WORKFLOWS[@]}"; do
if [[ ! " ${EXPECTED_WORKFLOWS[*]} " =~ " ${actual_workflow} " ]]; then
echo -e "${YELLOW} UNEXPECTED${NC}: $actual_workflow"
fi
done
echo
# Summary
if [[ $MISSING_WORKFLOWS -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All expected workflows present ===${NC}"
echo "Total workflows: ${#ACTUAL_WORKFLOWS[@]}"
exit 0
else
echo -e "${RED}=== FAILED: $MISSING_WORKFLOWS workflows missing ===${NC}"
echo
echo "To fix missing workflows:"
echo "1. Create missing workflow files in $WORKFLOWS_DIR"
echo "2. Use existing workflows as templates"
echo "3. Run this script again to verify"
exit 1
fi
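The unexpected-workflow check above relies on a whitespace-delimited membership test: both the joined array and the candidate are wrapped in spaces so only whole filenames match. This assumes names never contain spaces, which holds for the workflow list here. A minimal sketch with sample names:

```shell
# Membership test in the style of sync-workflows.sh. Assumes entries have no spaces.
EXPECTED=("deploy.md" "review.md")
name="review.md"
if [[ " ${EXPECTED[*]} " =~ " ${name} " ]]; then
    echo "expected"
else
    echo "unexpected"
fi
```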
@@ -1,106 +0,0 @@
#!/bin/bash
# validate-versions.sh - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# Expected version (should match LCBP3 version)
EXPECTED_VERSION="1.8.9"
echo "=== .agents Version Validation ==="
echo "Base directory: $BASE_DIR"
echo "Expected version: $EXPECTED_VERSION"
echo
# Function to extract version from file
extract_version() {
local file="$1"
local pattern="$2"
if [[ -f "$file" ]]; then
grep -o "$pattern" "$file" | head -1 | sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/' || echo "NOT_FOUND"
else
echo "FILE_NOT_FOUND"
fi
}
# Files to check
declare -A FILES_TO_CHECK=(
["$AGENTS_DIR/skills/VERSION"]="version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
["$AGENTS_DIR/skills/skills.md"]="[Vv]\([0-9]\+\.[0-9]\+\.[0-9]\+\)"
)
# Track issues
ISSUES=0
echo "Checking version consistency..."
echo
for file in "${!FILES_TO_CHECK[@]}"; do
pattern="${FILES_TO_CHECK[$file]}"
relative_path="${file#$BASE_DIR/}"
version=$(extract_version "$file" "$pattern")
if [[ "$version" == "NOT_FOUND" ]] || [[ "$version" == "FILE_NOT_FOUND" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Version not found"
ISSUES=$((ISSUES + 1)) # plain ((ISSUES++)) returns status 1 when ISSUES is 0 and would abort under set -e
elif [[ "$version" != "$EXPECTED_VERSION" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Found v$version, expected v$EXPECTED_VERSION"
ISSUES=$((ISSUES + 1))
else
echo -e "${GREEN} OK${NC}: $relative_path - v$version"
fi
done
echo
# Check for version mismatches in skill files
echo "Checking skill file versions..."
SKILL_VERSIONS_FILE="$AGENTS_DIR/skills/VERSION"
if [[ -f "$SKILL_VERSIONS_FILE" ]]; then
skills_version=$(extract_version "$SKILL_VERSIONS_FILE" "version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)")
echo "Skills version file: v$skills_version"
fi
# Check workflow versions (in .windsurf/workflows)
WORKFLOWS_DIR="$BASE_DIR/.windsurf/workflows"
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo "Checking workflow files..."
workflow_count=0
for workflow in "$WORKFLOWS_DIR"/*.md; do
if [[ -f "$workflow" ]]; then
workflow_count=$((workflow_count + 1))
fi
done
echo -e "${GREEN} OK${NC}: Found $workflow_count workflow files"
else
echo -e "${YELLOW} WARNING${NC}: Workflows directory not found at $WORKFLOWS_DIR"
fi
echo
# Summary
if [[ $ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All versions consistent ===${NC}"
exit 0
else
echo -e "${RED}=== FAILED: $ISSUES version issues found ===${NC}"
echo
echo "To fix version issues:"
echo "1. Update files to use v$EXPECTED_VERSION"
echo "2. Ensure LCBP3 project version matches"
echo "3. Run this script again to verify"
exit 1
fi
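The `extract_version` helper above combines `grep -o` (which, with a BRE group, still prints the whole match such as `version: 1.8.9`) with a `sed` capture that trims it to the bare number. A minimal sketch of the same pipeline on sample input (GNU grep assumed for the `\+` quantifier):

```shell
# Same extraction pipeline as extract_version(), run on an inline sample.
grep -o 'version: [0-9]\+\.[0-9]\+\.[0-9]\+' <<< 'version: 1.8.9' \
  | head -1 \
  | sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/'
```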
@@ -1,516 +0,0 @@
# ci-hooks.ps1 - Continuous integration hooks for .agents (PowerShell version)
# Part of LCBP3-DMS Phase 3 enhancements
param(
[Parameter(Mandatory=$false)]
[ValidateSet("pre-commit", "pre-push", "ci-pipeline", "install-hooks", "help")]
[string]$Command = "help"
)
# Configuration
$BaseDir = Split-Path -Parent (Split-Path -Parent $PSScriptRoot)
$AgentsDir = Join-Path $BaseDir ".agents"
$CILogDir = Join-Path $AgentsDir "logs\ci"
$CIReportDir = Join-Path $AgentsDir "reports\ci"
# Ensure directories exist
if (-not (Test-Path $CILogDir)) { New-Item -ItemType Directory -Path $CILogDir -Force | Out-Null }
if (-not (Test-Path $CIReportDir)) { New-Item -ItemType Directory -Path $CIReportDir -Force | Out-Null }
# Colors for output (ConsoleColor names; Write-Host -ForegroundColor does not accept ANSI escape strings)
$Colors = @{
Red = "Red"
Green = "Green"
Yellow = "Yellow"
Blue = "Blue"
}
# Logging function
function Write-CILog {
param(
[string]$Level,
[string]$Message
)
$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$logFile = Join-Path $CILogDir "ci-$(Get-Date -Format 'yyyy-MM-dd').log"
"$timestamp [$Level] $Message" | Out-File -FilePath $logFile -Append
# Console output with colors
switch ($Level) {
"INFO" { Write-Host $Message -ForegroundColor $Colors.Blue }
"PASS" { Write-Host $Message -ForegroundColor $Colors.Green }
"WARN" { Write-Host $Message -ForegroundColor $Colors.Yellow }
"FAIL" { Write-Host $Message -ForegroundColor $Colors.Red }
default { Write-Host $Message }
}
}
# Pre-commit hook
function Invoke-PreCommitHook {
Write-CILog "INFO" "Running pre-commit validation..."
$exitCode = 0
# 1. Run version validation
Write-CILog "INFO" "Checking version consistency..."
$versionScript = Join-Path $AgentsDir "scripts\powershell\validate-versions.ps1"
if (Test-Path $versionScript) {
try {
& $versionScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-versions.log") -Append
# A failing script sets a non-zero exit code but does not throw, so check it explicitly
if ($LASTEXITCODE) { throw "validate-versions.ps1 exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Version validation passed"
} catch {
Write-CILog "FAIL" "Version validation failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Version validation script not found"
}
# 2. Run skill audit
Write-CILog "INFO" "Auditing skills..."
$auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
if (Test-Path $auditScript) {
try {
& $auditScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-skills.log") -Append
if ($LASTEXITCODE) { throw "audit-skills.ps1 exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Skill audit passed"
} catch {
Write-CILog "FAIL" "Skill audit failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Skill audit script not found"
}
# 3. Run integration tests (if Node.js available)
if (Get-Command node -ErrorAction SilentlyContinue) {
Write-CILog "INFO" "Running integration tests..."
$testScript = Join-Path $AgentsDir "tests\skill-integration.test.js"
if (Test-Path $testScript) {
try {
node $testScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-tests.log") -Append
if ($LASTEXITCODE) { throw "tests exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Integration tests passed"
} catch {
Write-CILog "WARN" "Integration tests failed (non-blocking)"
}
} else {
Write-CILog "WARN" "Integration test script not found"
}
} else {
Write-CILog "WARN" "Node.js not available, skipping integration tests"
}
# 4. Check for forbidden patterns
Write-CILog "INFO" "Checking for forbidden patterns..."
$forbiddenPatterns = @("TODO", "FIXME", "XXX", "HACK")
$foundForbidden = $false
foreach ($pattern in $forbiddenPatterns) {
$skillsDir = Join-Path $AgentsDir "skills"
if (Test-Path $skillsDir) {
# Select-String has no -Recurse switch; recurse with Get-ChildItem instead.
# Also avoid assigning to $matches, which is an automatic variable.
$found = Get-ChildItem -Path $skillsDir -Filter *.md -Recurse | Select-String -Pattern $pattern
if ($found) {
Write-CILog "WARN" "Found forbidden pattern: $pattern"
$foundForbidden = $true
}
}
}
if (-not $foundForbidden) {
Write-CILog "PASS" "No forbidden patterns found"
}
# Generate pre-commit report
$reportFile = Join-Path $CIReportDir "pre-commit-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-commit"
exit_code = $exitCode
checks_performed = @(
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
)
log_files = @(
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-commit report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-commit validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-commit validation failed"
}
return $exitCode
}
# Pre-push hook
function Invoke-PrePushHook {
Write-CILog "INFO" "Running pre-push validation..."
$exitCode = 0
# 1. Full health check
Write-CILog "INFO" "Running full health check..."
if (Get-Command node -ErrorAction SilentlyContinue) {
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
if (Test-Path $healthScript) {
try {
node $healthScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-health.log") -Append
if ($LASTEXITCODE) { throw "health-monitor.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Health check passed"
} catch {
Write-CILog "FAIL" "Health check failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Health monitor script not found"
}
} else {
Write-CILog "WARN" "Node.js not available, using basic health check"
$auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
if (Test-Path $auditScript) {
try {
& $auditScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-basic.log") -Append
if ($LASTEXITCODE) { throw "audit-skills.ps1 exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Basic health check passed"
} catch {
Write-CILog "FAIL" "Basic health check failed"
$exitCode = 1
}
}
}
# 2. Advanced validation (if available)
if (Get-Command node -ErrorAction SilentlyContinue) {
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
Write-CILog "INFO" "Running advanced validation..."
try {
node $advancedScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-advanced.log") -Append
if ($LASTEXITCODE) { throw "advanced-validator.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Advanced validation passed"
} catch {
Write-CILog "WARN" "Advanced validation found issues (non-blocking)"
}
}
}
# 3. Dependency validation
if (Get-Command node -ErrorAction SilentlyContinue) {
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
Write-CILog "INFO" "Running dependency validation..."
try {
node $dependencyScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-dependencies.log") -Append
if ($LASTEXITCODE) { throw "dependency-validator.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Dependency validation passed"
} catch {
Write-CILog "WARN" "Dependency validation found issues (non-blocking)"
}
}
}
# 4. Performance monitoring
if (Get-Command node -ErrorAction SilentlyContinue) {
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
Write-CILog "INFO" "Running performance monitoring..."
try {
node $performanceScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-performance.log") -Append
if ($LASTEXITCODE) { throw "performance-monitor.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Performance monitoring passed"
} catch {
Write-CILog "WARN" "Performance monitoring found issues (non-blocking)"
}
}
}
# Generate pre-push report
$reportFile = Join-Path $CIReportDir "pre-push-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-push"
exit_code = $exitCode
checks_performed = @(
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
log_files = @(
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-push report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-push validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-push validation failed"
}
return $exitCode
}
# CI pipeline hook
function Invoke-CIPipelineHook {
Write-CILog "INFO" "Running CI pipeline validation..."
$exitCode = 0
$pipelineStart = Get-Date
# Create pipeline workspace
$workspace = Join-Path $CIReportDir "pipeline-$(Get-Date -Format 'yyyyMMdd-HHmmss')"
New-Item -ItemType Directory -Path $workspace -Force | Out-Null
# 1. Environment validation
Write-CILog "INFO" "Validating CI environment..."
# Check required tools
$requiredTools = @("node", "npm")
foreach ($tool in $requiredTools) {
if (Get-Command $tool -ErrorAction SilentlyContinue) {
Write-CILog "PASS" "Tool available: $tool"
} else {
Write-CILog "FAIL" "Tool missing: $tool"
$exitCode = 1
}
}
# Check Node.js modules
$packageJson = Join-Path $AgentsDir "package.json"
if (Test-Path $packageJson) {
Push-Location $AgentsDir
# npm is a native command: failures set $LASTEXITCODE rather than throwing
npm list --depth=0 *> $null
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Node.js dependencies installed"
} else {
Write-CILog "WARN" "Installing Node.js dependencies..."
npm install 2>&1 | Out-File -FilePath (Join-Path $workspace "npm-install.log")
if ($LASTEXITCODE) {
Write-CILog "FAIL" "Failed to install Node.js dependencies"
$exitCode = 1
}
}
Pop-Location
}
# 2. Full test suite
Write-CILog "INFO" "Running full test suite..."
# Integration tests
$integrationTest = Join-Path $AgentsDir "tests\skill-integration.test.js"
if (Test-Path $integrationTest) {
try {
node $integrationTest 2>&1 | Out-File -FilePath (Join-Path $workspace "integration-tests.log")
if ($LASTEXITCODE) { throw "tests exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Integration tests passed"
} catch {
Write-CILog "FAIL" "Integration tests failed"
$exitCode = 1
}
}
# Workflow validation tests
$workflowTest = Join-Path $AgentsDir "tests\workflow-validation.test.js"
if (Test-Path $workflowTest) {
try {
node $workflowTest 2>&1 | Out-File -FilePath (Join-Path $workspace "workflow-tests.log")
if ($LASTEXITCODE) { throw "tests exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Workflow validation tests passed"
} catch {
Write-CILog "FAIL" "Workflow validation tests failed"
$exitCode = 1
}
}
# 3. Comprehensive validation
Write-CILog "INFO" "Running comprehensive validation..."
# Health monitoring
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
if (Test-Path $healthScript) {
try {
node $healthScript 2>&1 | Out-File -FilePath (Join-Path $workspace "health-check.log")
if ($LASTEXITCODE) { throw "health-monitor.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Health monitoring passed"
} catch {
Write-CILog "FAIL" "Health monitoring failed"
$exitCode = 1
}
}
# Advanced validation
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
try {
node $advancedScript 2>&1 | Out-File -FilePath (Join-Path $workspace "advanced-validation.log")
if ($LASTEXITCODE) { throw "advanced-validator.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Advanced validation passed"
} catch {
Write-CILog "WARN" "Advanced validation found issues"
}
}
# Dependency validation
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
try {
node $dependencyScript 2>&1 | Out-File -FilePath (Join-Path $workspace "dependency-validation.log")
if ($LASTEXITCODE) { throw "dependency-validator.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Dependency validation passed"
} catch {
Write-CILog "WARN" "Dependency validation found issues"
}
}
# Performance monitoring
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
try {
node $performanceScript 2>&1 | Out-File -FilePath (Join-Path $workspace "performance-monitor.log")
if ($LASTEXITCODE) { throw "performance-monitor.js exited with code $LASTEXITCODE" }
Write-CILog "PASS" "Performance monitoring passed"
} catch {
Write-CILog "WARN" "Performance monitoring found issues"
}
}
# 4. Generate artifacts
Write-CILog "INFO" "Generating CI artifacts..."
$pipelineEnd = Get-Date
$duration = ($pipelineEnd - $pipelineStart).TotalSeconds
# Consolidated report
$reportFile = Join-Path $workspace "ci-pipeline-report.json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
pipeline_type = "full_ci"
duration_seconds = [int]$duration
exit_code = $exitCode
environment = @{
node_version = $(if (Get-Command node -ErrorAction SilentlyContinue) { node --version } else { "unavailable" })
platform = $env:OS
working_directory = $BaseDir
}
checks_performed = @(
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
artifacts = @(
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
)
workspace = $workspace
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "CI pipeline report saved to: $reportFile"
Write-CILog "INFO" "CI artifacts saved to: $workspace"
Write-CILog "INFO" "Pipeline duration: $([int]$duration)s"
if ($exitCode -eq 0) {
Write-CILog "PASS" "CI pipeline completed successfully"
} else {
Write-CILog "FAIL" "CI pipeline failed"
}
return $exitCode
}
# Install Git hooks
function Install-GitHooks {
Write-CILog "INFO" "Installing Git hooks..."
$hooksDir = Join-Path $BaseDir ".git\hooks"
$agentsHooksDir = Join-Path $AgentsDir "scripts\git-hooks"
# Create git-hooks directory
if (-not (Test-Path $agentsHooksDir)) {
New-Item -ItemType Directory -Path $agentsHooksDir -Force | Out-Null
}
# Create pre-commit hook
$preCommitContent = @'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
'@
# Write with LF endings and no BOM; CRLF or a UTF-8 BOM would break the bash shebang
[System.IO.File]::WriteAllText((Join-Path $agentsHooksDir "pre-commit"), ($preCommitContent -replace "`r`n", "`n"))
# Create pre-push hook
$prePushContent = @'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
'@
# Write with LF endings and no BOM; CRLF or a UTF-8 BOM would break the bash shebang
[System.IO.File]::WriteAllText((Join-Path $agentsHooksDir "pre-push"), ($prePushContent -replace "`r`n", "`n"))
# Install hooks if .git directory exists
if (Test-Path $hooksDir) {
Copy-Item (Join-Path $agentsHooksDir "pre-commit") $hooksDir -Force
Copy-Item (Join-Path $agentsHooksDir "pre-push") $hooksDir -Force
Write-CILog "PASS" "Git hooks installed successfully"
} else {
Write-CILog "WARN" "Git repository not found, hooks copied to .agents\scripts\git-hooks"
}
}
# Main execution
switch ($Command) {
"pre-commit" {
exit (Invoke-PreCommitHook)
}
"pre-push" {
exit (Invoke-PrePushHook)
}
"ci-pipeline" {
exit (Invoke-CIPipelineHook)
}
"install-hooks" {
Install-GitHooks
}
"help" {
Write-Host "Usage: .\ci-hooks.ps1 -Command {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
Write-Host ""
Write-Host "Commands:"
Write-Host " pre-commit - Run pre-commit validation"
Write-Host " pre-push - Run pre-push validation"
Write-Host " ci-pipeline - Run full CI pipeline"
Write-Host " install-hooks - Install Git hooks"
Write-Host " help - Show this help"
}
default {
Write-Host "Unknown command: $Command"
Write-Host "Use 'help' to see available commands"
exit 1
}
}
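`Install-GitHooks` above copies wrapper scripts into `.git/hooks`. A common alternative, sketched here under the assumption of Git ≥ 2.9, is to point `core.hooksPath` at the versioned hooks directory so the hooks are tracked and need no copy step (the `/tmp/hookdemo` repository and hook body are illustrative only):

```shell
# Hypothetical layout: hooks live in a tracked directory instead of .git/hooks.
git init -q /tmp/hookdemo && cd /tmp/hookdemo
mkdir -p .agents/scripts/git-hooks
printf '#!/bin/sh\necho "pre-commit ran"\n' > .agents/scripts/git-hooks/pre-commit
chmod +x .agents/scripts/git-hooks/pre-commit
# Git now looks in this directory for every hook; re-clones only need the config line.
git config core.hooksPath .agents/scripts/git-hooks
git config core.hooksPath
```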
@@ -1,445 +0,0 @@
#!/bin/bash
# ci-hooks.sh - Continuous integration hooks for .agents
# Part of LCBP3-DMS Phase 3 enhancements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# CI configuration
CI_LOG_DIR="$AGENTS_DIR/logs/ci"
CI_REPORT_DIR="$AGENTS_DIR/reports/ci"
# Ensure directories exist
mkdir -p "$CI_LOG_DIR" "$CI_REPORT_DIR"
# Logging function
ci_log() {
local level="$1"
local message="$2"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local log_file="$CI_LOG_DIR/ci-$(date '+%Y-%m-%d').log"
# Append to the log file only; the colored echo below handles console output
echo "[$timestamp] [$level] $message" >> "$log_file"
# Console output with colors
case "$level" in
"INFO") echo -e "${BLUE}$message${NC}" ;;
"PASS") echo -e "${GREEN}$message${NC}" ;;
"WARN") echo -e "${YELLOW}$message${NC}" ;;
"FAIL") echo -e "${RED}$message${NC}" ;;
*) echo "$message" ;;
esac
}
# Pre-commit hook
pre_commit_hook() {
ci_log "INFO" "Running pre-commit validation..."
local exit_code=0
# 1. Run version validation
ci_log "INFO" "Checking version consistency..."
if "$AGENTS_DIR/scripts/bash/validate-versions.sh" >> "$CI_LOG_DIR/pre-commit-versions.log" 2>&1; then
ci_log "PASS" "Version validation passed"
else
ci_log "FAIL" "Version validation failed"
exit_code=1
fi
# 2. Run skill audit
ci_log "INFO" "Auditing skills..."
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-commit-skills.log" 2>&1; then
ci_log "PASS" "Skill audit passed"
else
ci_log "FAIL" "Skill audit failed"
exit_code=1
fi
# 3. Run integration tests (if Node.js available)
if command -v node >/dev/null 2>&1; then
ci_log "INFO" "Running integration tests..."
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$CI_LOG_DIR/pre-commit-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "WARN" "Integration tests failed (non-blocking)"
fi
else
ci_log "WARN" "Node.js not available, skipping integration tests"
fi
# 4. Check for forbidden patterns
ci_log "INFO" "Checking for forbidden patterns..."
local forbidden_patterns=("TODO" "FIXME" "XXX" "HACK")
local found_forbidden=false
for pattern in "${forbidden_patterns[@]}"; do
if grep -r "$pattern" "$AGENTS_DIR/skills" --include="*.md" >/dev/null 2>&1; then
ci_log "WARN" "Found forbidden pattern: $pattern"
found_forbidden=true
fi
done
if [ "$found_forbidden" = false ]; then
ci_log "PASS" "No forbidden patterns found"
fi
# Generate pre-commit report
local report_file="$CI_REPORT_DIR/pre-commit-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-commit",
"exit_code": $exit_code,
"checks_performed": [
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
],
"log_files": [
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
]
}
EOF
ci_log "INFO" "Pre-commit report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-commit validation completed successfully"
else
ci_log "FAIL" "Pre-commit validation failed"
fi
return $exit_code
}
# Pre-push hook
pre_push_hook() {
ci_log "INFO" "Running pre-push validation..."
local exit_code=0
# 1. Full health check
ci_log "INFO" "Running full health check..."
if command -v node >/dev/null 2>&1; then
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$CI_LOG_DIR/pre-push-health.log" 2>&1; then
ci_log "PASS" "Health check passed"
else
ci_log "FAIL" "Health check failed"
exit_code=1
fi
else
ci_log "WARN" "Node.js not available, using basic health check"
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-push-basic.log" 2>&1; then
ci_log "PASS" "Basic health check passed"
else
ci_log "FAIL" "Basic health check failed"
exit_code=1
fi
fi
# 2. Advanced validation (if available)
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/advanced-validator.js" ]; then
ci_log "INFO" "Running advanced validation..."
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$CI_LOG_DIR/pre-push-advanced.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues (non-blocking)"
fi
fi
# 3. Dependency validation
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/dependency-validator.js" ]; then
ci_log "INFO" "Running dependency validation..."
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$CI_LOG_DIR/pre-push-dependencies.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues (non-blocking)"
fi
fi
# 4. Performance monitoring
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/performance-monitor.js" ]; then
ci_log "INFO" "Running performance monitoring..."
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$CI_LOG_DIR/pre-push-performance.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues (non-blocking)"
fi
fi
# Generate pre-push report
local report_file="$CI_REPORT_DIR/pre-push-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-push",
"exit_code": $exit_code,
"checks_performed": [
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"log_files": [
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
]
}
EOF
ci_log "INFO" "Pre-push report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-push validation completed successfully"
else
ci_log "FAIL" "Pre-push validation failed"
fi
return $exit_code
}
# CI pipeline hook
ci_pipeline_hook() {
ci_log "INFO" "Running CI pipeline validation..."
local exit_code=0
local pipeline_start=$(date +%s)
# Create pipeline workspace
local workspace="$CI_REPORT_DIR/pipeline-$(date '+%Y%m%d-%H%M%S')"
mkdir -p "$workspace"
# 1. Environment validation
ci_log "INFO" "Validating CI environment..."
# Check required tools
local required_tools=("node" "npm")
for tool in "${required_tools[@]}"; do
if command -v "$tool" >/dev/null 2>&1; then
ci_log "PASS" "Tool available: $tool"
else
ci_log "FAIL" "Tool missing: $tool"
exit_code=1
fi
done
# Check Node.js modules
if [ -f "$AGENTS_DIR/package.json" ]; then
cd "$AGENTS_DIR"
if npm list --depth=0 >/dev/null 2>&1; then
ci_log "PASS" "Node.js dependencies installed"
else
ci_log "WARN" "Installing Node.js dependencies..."
npm install >> "$workspace/npm-install.log" 2>&1 || {
ci_log "FAIL" "Failed to install Node.js dependencies"
exit_code=1
}
fi
cd "$BASE_DIR"
fi
# 2. Full test suite
ci_log "INFO" "Running full test suite..."
# Integration tests
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$workspace/integration-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "FAIL" "Integration tests failed"
exit_code=1
fi
# Workflow validation tests
if node "$AGENTS_DIR/tests/workflow-validation.test.js" >> "$workspace/workflow-tests.log" 2>&1; then
ci_log "PASS" "Workflow validation tests passed"
else
ci_log "FAIL" "Workflow validation tests failed"
exit_code=1
fi
# 3. Comprehensive validation
ci_log "INFO" "Running comprehensive validation..."
# Health monitoring
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$workspace/health-check.log" 2>&1; then
ci_log "PASS" "Health monitoring passed"
else
ci_log "FAIL" "Health monitoring failed"
exit_code=1
fi
# Advanced validation
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$workspace/advanced-validation.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues"
fi
# Dependency validation
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$workspace/dependency-validation.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues"
fi
# Performance monitoring
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$workspace/performance-monitor.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues"
fi
# 4. Generate artifacts
ci_log "INFO" "Generating CI artifacts..."
local pipeline_end=$(date +%s)
local duration=$((pipeline_end - pipeline_start))
# Consolidated report
local report_file="$workspace/ci-pipeline-report.json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"pipeline_type": "full_ci",
"duration_seconds": $duration,
"exit_code": $exit_code,
"environment": {
"node_version": "$(node --version)",
"platform": "$(uname -s)",
"working_directory": "$BASE_DIR"
},
"checks_performed": [
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"artifacts": [
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
],
"workspace": "$workspace"
}
EOF
ci_log "INFO" "CI pipeline report saved to: $report_file"
ci_log "INFO" "CI artifacts saved to: $workspace"
ci_log "INFO" "Pipeline duration: ${duration}s"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "CI pipeline completed successfully"
else
ci_log "FAIL" "CI pipeline failed"
fi
return $exit_code
}
# Install Git hooks
install_git_hooks() {
ci_log "INFO" "Installing Git hooks..."
local hooks_dir="$BASE_DIR/.git/hooks"
local agents_hooks_dir="$AGENTS_DIR/scripts/git-hooks"
# Create git-hooks directory
mkdir -p "$agents_hooks_dir"
# Create pre-commit hook
cat > "$agents_hooks_dir/pre-commit" << 'EOF'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
EOF
# Create pre-push hook
cat > "$agents_hooks_dir/pre-push" << 'EOF'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
EOF
# Make hooks executable
chmod +x "$agents_hooks_dir/pre-commit"
chmod +x "$agents_hooks_dir/pre-push"
# Install hooks if .git directory exists
if [ -d "$hooks_dir" ]; then
cp "$agents_hooks_dir/pre-commit" "$hooks_dir/"
cp "$agents_hooks_dir/pre-push" "$hooks_dir/"
ci_log "PASS" "Git hooks installed successfully"
else
ci_log "WARN" "Git repository not found, hooks copied to .agents/scripts/git-hooks"
fi
}
# Main function
main() {
local command="${1:-help}"
case "$command" in
"pre-commit")
pre_commit_hook
;;
"pre-push")
pre_push_hook
;;
"ci-pipeline")
ci_pipeline_hook
;;
"install-hooks")
install_git_hooks
;;
"help"|*)
echo "Usage: $0 {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
echo ""
echo "Commands:"
echo " pre-commit - Run pre-commit validation"
echo " pre-push - Run pre-push validation"
echo " ci-pipeline - Run full CI pipeline"
echo " install-hooks - Install Git hooks"
echo " help - Show this help"
;;
esac
}
# Run main function with all arguments
main "$@"
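The `ci_pipeline_hook` heredoc above emits `ci-pipeline-report.json`; the same payload can be sketched in Node to show the shape any downstream tooling would read. Every field value below is an illustrative placeholder, not real pipeline output; only the field names mirror the heredoc:

```javascript
// Sketch of the ci-pipeline-report.json payload written by ci_pipeline_hook().
// Values are placeholders; the structure follows the heredoc in the script above.
const report = {
  timestamp: new Date().toISOString(),
  pipeline_type: 'full_ci',
  duration_seconds: 12,
  exit_code: 0,
  environment: {
    node_version: process.version,
    platform: process.platform,
    working_directory: process.cwd()
  },
  checks_performed: ['environment_validation', 'integration_tests'],
  artifacts: ['integration-tests.log', 'npm-install.log']
};

// Same pretty-printed form the shell heredoc produces
console.log(JSON.stringify(report, null, 2));
```

Because the shell script builds this JSON by string interpolation, any consumer should still `JSON.parse` defensively rather than assume a well-formed document.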
@@ -1,457 +0,0 @@
#!/usr/bin/env node
/**
* dependency-validator.js - Skill dependency validation system
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Dependency validation class
class DependencyValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
dependency_graph: {},
circular_dependencies: [],
missing_dependencies: [],
orphaned_skills: [],
dependency_chains: {},
validation_summary: {
total_skills: 0,
skills_with_dependencies: 0,
circular_dependencies_found: 0,
missing_dependencies_found: 0,
orphaned_skills_found: 0,
max_dependency_depth: 0,
validation_status: 'unknown'
}
};
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
extractSkillDependencies(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`No SKILL.md found for ${skillName}`, 'warn');
return { dependencies: [], handoffs: [], error: 'SKILL.md not found' };
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Extract dependencies from front matter
let dependencies = [];
let handoffs = [];
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (frontMatterMatch) {
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
// Handle depends-on field
if (frontMatter['depends-on']) {
if (Array.isArray(frontMatter['depends-on'])) {
dependencies = frontMatter['depends-on'];
} else {
dependencies = [frontMatter['depends-on']];
}
}
// Handle handoffs field
if (frontMatter.handoffs && Array.isArray(frontMatter.handoffs)) {
handoffs = frontMatter.handoffs.map(h => h.agent);
}
} catch (yamlError) {
this.log(`Invalid YAML in ${skillName} front matter: ${yamlError.message}`, 'warn');
}
}
// Also extract skill references from content
const contentSkillRefs = content.match(/@speckit-[\w-]+/g) || []; // [\w-] so multi-hyphen skill names are not truncated
const contentDependencies = contentSkillRefs.map(ref => ref.replace('@', ''));
// Merge dependencies (avoid duplicates)
const allDependencies = [...new Set([...dependencies, ...contentDependencies])];
return {
dependencies: allDependencies,
handoffs: handoffs,
content_references: contentSkillRefs,
front_matter_dependencies: dependencies,
error: null
};
} catch (error) {
this.log(`Error reading ${skillName}: ${error.message}`, 'warn');
return { dependencies: [], handoffs: [], error: error.message };
}
}
buildDependencyGraph() {
this.log('Building dependency graph...', 'info');
if (!fs.existsSync(SKILLS_DIR)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
this.validationResults.validation_summary.total_skills = skillDirs.length;
// Extract dependencies for each skill
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
const dependencyInfo = this.extractSkillDependencies(skillPath, skillDir);
this.validationResults.dependency_graph[skillDir] = dependencyInfo;
if (dependencyInfo.dependencies.length > 0 || dependencyInfo.handoffs.length > 0) {
this.validationResults.validation_summary.skills_with_dependencies++;
}
}
this.log(`Analyzed ${skillDirs.length} skills`, 'info');
this.log(`Skills with dependencies: ${this.validationResults.validation_summary.skills_with_dependencies}`, 'info');
}
validateDependencies() {
this.log('Validating dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const allSkills = Object.keys(dependency_graph);
// Check for missing dependencies
for (const [skillName, dependencyInfo] of Object.entries(dependency_graph)) {
for (const dependency of dependencyInfo.dependencies) {
if (!allSkills.includes(dependency)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: dependency,
dependency_type: 'depends-on'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing dependency: ${skillName} depends on ${dependency}`, 'fail');
}
}
for (const handoff of dependencyInfo.handoffs) {
if (!allSkills.includes(handoff)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: handoff,
dependency_type: 'handoff'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing handoff: ${skillName} hands off to ${handoff}`, 'fail');
}
}
}
// Check for orphaned skills (no one depends on them)
const dependedOnSkills = new Set();
for (const dependencyInfo of Object.values(dependency_graph)) {
dependencyInfo.dependencies.forEach(dep => dependedOnSkills.add(dep));
dependencyInfo.handoffs.forEach(handoff => dependedOnSkills.add(handoff));
}
for (const skill of allSkills) {
// Constitution is allowed to be orphaned (it's a starting point)
if (!dependedOnSkills.has(skill) && skill !== 'speckit-constitution') {
this.validationResults.orphaned_skills.push(skill);
this.validationResults.validation_summary.orphaned_skills_found++;
this.log(`Orphaned skill: ${skill} (no dependencies on it)`, 'warn');
}
}
}
detectCircularDependencies() {
this.log('Detecting circular dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const visited = new Set();
const recursionStack = new Set();
const circularDeps = [];
function dfs(skillName, path = []) {
if (recursionStack.has(skillName)) {
// Found circular dependency
const cycleStart = path.indexOf(skillName);
const cycle = path.slice(cycleStart).concat(skillName);
circularDeps.push(cycle);
return;
}
if (visited.has(skillName)) {
return;
}
visited.add(skillName);
recursionStack.add(skillName);
path.push(skillName);
const dependencyInfo = dependency_graph[skillName];
if (dependencyInfo) {
for (const dependency of dependencyInfo.dependencies) {
dfs(dependency, [...path]);
}
}
recursionStack.delete(skillName);
}
// Run DFS from each skill
for (const skillName of Object.keys(dependency_graph)) {
if (!visited.has(skillName)) {
dfs(skillName);
}
}
this.validationResults.circular_dependencies = circularDeps;
this.validationResults.validation_summary.circular_dependencies_found = circularDeps.length;
if (circularDeps.length > 0) {
this.log(`Found ${circularDeps.length} circular dependencies:`, 'critical');
circularDeps.forEach((cycle, index) => {
this.log(` ${index + 1}. ${cycle.join(' -> ')}`, 'critical');
});
} else {
this.log('No circular dependencies found', 'pass');
}
}
calculateDependencyChains() {
this.log('Calculating dependency chains...', 'info');
const { dependency_graph } = this.validationResults;
const chains = {};
function calculateDepth(skillName, visited = new Set()) {
if (visited.has(skillName)) {
return 0; // Circular dependency protection
}
visited.add(skillName);
const dependencyInfo = dependency_graph[skillName];
if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
return 1;
}
let maxDepth = 0;
for (const dependency of dependencyInfo.dependencies) {
const depth = calculateDepth(dependency, new Set(visited));
maxDepth = Math.max(maxDepth, depth);
}
return maxDepth + 1;
}
function getDependencyChain(skillName, visited = new Set()) {
if (visited.has(skillName)) {
return [skillName]; // Circular dependency protection
}
visited.add(skillName);
const dependencyInfo = dependency_graph[skillName];
if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
return [skillName];
}
const candidateChains = [];
for (const dependency of dependencyInfo.dependencies) {
const depChain = getDependencyChain(dependency, new Set(visited));
candidateChains.push(depChain.concat(skillName));
}
// Return the longest chain
return candidateChains.reduce((longest, current) =>
current.length > longest.length ? current : longest, [skillName]
);
}
for (const skillName of Object.keys(dependency_graph)) {
const depth = calculateDepth(skillName);
const chain = getDependencyChain(skillName);
chains[skillName] = {
depth: depth,
chain: chain,
chain_length: chain.length
};
}
this.validationResults.dependency_chains = chains;
const maxDepth = Object.keys(chains).length > 0 ? Math.max(...Object.values(chains).map(c => c.depth)) : 0;
this.validationResults.validation_summary.max_dependency_depth = maxDepth;
this.log(`Maximum dependency depth: ${maxDepth}`, 'info');
}
validateWorkflowDependencies() {
this.log('Validating workflow dependencies...', 'info');
if (!fs.existsSync(WORKFLOWS_DIR)) {
this.log('Workflows directory not found', 'warn');
return;
}
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
const allSkills = Object.keys(this.validationResults.dependency_graph);
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
try {
const content = fs.readFileSync(workflowPath, 'utf8');
const skillReferences = content.match(/@speckit-[\w-]+/g) || []; // [\w-] so multi-hyphen skill names are not truncated
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
if (!allSkills.includes(skillName)) {
this.validationResults.missing_dependencies.push({
workflow: workflowFile,
missing_dependency: skillName,
dependency_type: 'workflow-reference'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Workflow ${workflowFile} references missing skill: ${skillRef}`, 'fail');
}
}
} catch (error) {
this.log(`Error reading workflow ${workflowFile}: ${error.message}`, 'warn');
}
}
}
generateDependencyReport() {
this.log('Generating dependency report...', 'info');
// Determine overall validation status
const summary = this.validationResults.validation_summary;
if (summary.circular_dependencies_found > 0) {
summary.validation_status = 'critical';
} else if (summary.missing_dependencies_found > 0) {
summary.validation_status = 'failed';
} else if (summary.orphaned_skills_found > 0) {
summary.validation_status = 'warning';
} else {
summary.validation_status = 'passed';
}
// Save report
const reportPath = path.join(AGENTS_DIR, 'reports', 'dependency-validation.json');
const reportsDir = path.dirname(reportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(reportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Dependency validation report saved to: ${reportPath}`, 'info');
}
printSummary() {
const summary = this.validationResults.validation_summary;
this.log('=== Dependency Validation Summary ===', 'info');
this.log(`Total skills: ${summary.total_skills}`, 'info');
this.log(`Skills with dependencies: ${summary.skills_with_dependencies}`, 'info');
this.log(`Circular dependencies: ${summary.circular_dependencies_found}`, summary.circular_dependencies_found > 0 ? 'critical' : 'pass');
this.log(`Missing dependencies: ${summary.missing_dependencies_found}`, summary.missing_dependencies_found > 0 ? 'fail' : 'pass');
this.log(`Orphaned skills: ${summary.orphaned_skills_found}`, summary.orphaned_skills_found > 0 ? 'warn' : 'info');
this.log(`Max dependency depth: ${summary.max_dependency_depth}`, 'info');
this.log(`Validation status: ${summary.validation_status.toUpperCase()}`,
summary.validation_status === 'passed' ? 'pass' :
summary.validation_status === 'warning' ? 'warn' : 'fail');
// Show longest dependency chains
const chains = this.validationResults.dependency_chains;
const sortedChains = Object.entries(chains)
.sort(([,a], [,b]) => b.depth - a.depth)
.slice(0, 3);
if (sortedChains.length > 0) {
this.log('Top 3 longest dependency chains:', 'info');
sortedChains.forEach(([skillName, chainInfo], index) => {
this.log(` ${index + 1}. ${chainInfo.chain.join(' -> ')} (depth: ${chainInfo.depth})`, 'info');
});
}
}
async runDependencyValidation() {
this.log('Starting dependency validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Build dependency graph
this.buildDependencyGraph();
// Validate dependencies
this.validateDependencies();
// Detect circular dependencies
this.detectCircularDependencies();
// Calculate dependency chains
this.calculateDependencyChains();
// Validate workflow dependencies
this.validateWorkflowDependencies();
// Generate report
this.generateDependencyReport();
// Print summary
this.printSummary();
return this.validationResults;
}
}
// CLI interface
async function main() {
const validator = new DependencyValidator();
try {
const results = await validator.runDependencyValidation();
const status = results.validation_summary.validation_status;
process.exit(status === 'passed' || status === 'warning' ? 0 : 1);
} catch (error) {
console.error('Dependency validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { DependencyValidator };
// Run if called directly
if (require.main === module) {
main();
}
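The recursion-stack DFS used by `detectCircularDependencies()` above can be reduced to a standalone sketch. The toy graph and node names here are invented for illustration, not real skill data:

```javascript
// Minimal sketch of the recursion-stack DFS behind detectCircularDependencies().
// A node on the current recursion stack seen again closes a cycle.
function findCycles(graph) {
  const visited = new Set();
  const stack = new Set();   // nodes on the current DFS path
  const cycles = [];

  function dfs(node, path) {
    if (stack.has(node)) {
      // Found a cycle: slice the path from the first occurrence of node
      cycles.push(path.slice(path.indexOf(node)).concat(node));
      return;
    }
    if (visited.has(node)) return;
    visited.add(node);
    stack.add(node);
    for (const dep of graph[node] || []) dfs(dep, path.concat(node));
    stack.delete(node);
  }

  for (const node of Object.keys(graph)) {
    if (!visited.has(node)) dfs(node, []);
  }
  return cycles;
}

console.log(findCycles({ a: ['b'], b: ['c'], c: ['a'], d: [] }));
// [ [ 'a', 'b', 'c', 'a' ] ]
```

Note the shared `visited` set means the pass reports at least one cycle per strongly connected region rather than enumerating every cycle, which is the same trade-off the validator makes.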
@@ -1,369 +0,0 @@
#!/usr/bin/env node
/**
* health-monitor.js - Automated health monitoring system for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const HEALTH_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'health.log');
const HEALTH_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'health-report.json');
// Ensure directories exist
[ path.dirname(HEALTH_LOG_PATH), path.dirname(HEALTH_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Health monitoring class
class HealthMonitor {
constructor() {
this.startTime = new Date();
this.metrics = {
timestamp: this.startTime.toISOString(),
version: '1.8.6',
checks: {},
summary: {
total_checks: 0,
passed_checks: 0,
failed_checks: 0,
warnings: 0,
overall_health: 'unknown'
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(HEALTH_LOG_PATH, logEntry);
}
checkDirectoryExists(dirPath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(dirPath);
this.metrics.checks[checkName] = {
type: 'directory_exists',
status: exists ? 'pass' : 'fail',
path: dirPath,
message: exists ? 'Directory exists' : 'Directory missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Directory exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Directory missing: ${dirPath}`, 'fail');
}
return exists;
}
checkFileExists(filePath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(filePath);
this.metrics.checks[checkName] = {
type: 'file_exists',
status: exists ? 'pass' : 'fail',
path: filePath,
message: exists ? 'File exists' : 'File missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - File exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - File missing: ${filePath}`, 'fail');
}
return exists;
}
checkFileVersion(filePath, expectedVersion, checkName) {
this.metrics.summary.total_checks++;
if (!fs.existsSync(filePath)) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: 'File does not exist'
};
this.log(`${checkName}: FAIL - File not found: ${filePath}`, 'fail');
return false;
}
try {
const content = fs.readFileSync(filePath, 'utf8');
const versionMatch = content.match(/v?(\d+\.\d+\.\d+)/);
const actualVersion = versionMatch ? versionMatch[1] : 'not_found';
const versionMatches = actualVersion === expectedVersion;
this.metrics.checks[checkName] = {
type: 'version_check',
status: versionMatches ? 'pass' : 'fail',
path: filePath,
expected_version: expectedVersion,
actual_version: actualVersion,
message: versionMatches ? 'Version matches' : `Version mismatch (expected ${expectedVersion}, found ${actualVersion})`
};
if (versionMatches) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Version ${actualVersion}`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Version mismatch (expected ${expectedVersion}, found ${actualVersion})`, 'fail');
}
return versionMatches;
} catch (error) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: `Error reading file: ${error.message}`
};
this.log(`${checkName}: FAIL - Error reading file: ${error.message}`, 'fail');
return false;
}
}
checkSkillHealth() {
this.log('Checking skill health...', 'info');
const skillsDir = path.join(AGENTS_DIR, 'skills');
if (!fs.existsSync(skillsDir)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(skillsDir).filter(item => {
const itemPath = path.join(skillsDir, item);
return fs.statSync(itemPath).isDirectory();
});
this.metrics.summary.total_checks++;
this.metrics.checks['skill_count'] = {
type: 'skill_count',
status: skillDirs.length >= 20 ? 'pass' : 'warn',
count: skillDirs.length,
expected: 20,
message: `Found ${skillDirs.length} skills (expected at least 20)`
};
if (skillDirs.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Skill count: PASS - Found ${skillDirs.length} skills`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Skill count: WARN - Only ${skillDirs.length} skills found (expected at least 20)`, 'warn');
}
// Check individual skills
let healthySkills = 0;
skillDirs.forEach(skillDir => {
const skillPath = path.join(skillsDir, skillDir);
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (fs.existsSync(skillMdPath)) {
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const hasName = content.includes('name:');
const hasDescription = content.includes('description:');
const hasVersion = content.includes('version:');
const hasRole = content.includes('## Role');
const hasTask = content.includes('## Task');
const isHealthy = hasName && hasDescription && hasVersion && hasRole && hasTask;
if (isHealthy) healthySkills++;
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: isHealthy ? 'pass' : 'fail',
skill: skillDir,
has_name: hasName,
has_description: hasDescription,
has_version: hasVersion,
has_role: hasRole,
has_task: hasTask,
message: isHealthy ? 'Skill is healthy' : 'Skill has missing sections'
};
} catch (error) {
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: 'fail',
skill: skillDir,
message: `Error reading skill: ${error.message}`
};
}
}
});
this.metrics.summary.total_checks++;
if (healthySkills === skillDirs.length) {
this.metrics.summary.passed_checks++;
this.log(`Individual skills: PASS - All ${healthySkills} skills are healthy`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`Individual skills: FAIL - Only ${healthySkills}/${skillDirs.length} skills are healthy`, 'fail');
}
}
checkWorkflowHealth() {
this.log('Checking workflow health...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (!fs.existsSync(workflowsDir)) {
this.log('Workflows directory not found', 'fail');
return;
}
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
this.metrics.summary.total_checks++;
this.metrics.checks['workflow_count'] = {
type: 'workflow_count',
status: workflowFiles.length >= 20 ? 'pass' : 'warn',
count: workflowFiles.length,
expected: 20,
message: `Found ${workflowFiles.length} workflows (expected at least 20)`
};
if (workflowFiles.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Workflow count: PASS - Found ${workflowFiles.length} workflows`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Workflow count: WARN - Only ${workflowFiles.length} workflows found (expected at least 20)`, 'warn');
}
}
calculateOverallHealth() {
const { total_checks, passed_checks, failed_checks, warnings } = this.metrics.summary;
if (failed_checks === 0) {
this.metrics.summary.overall_health = warnings === 0 ? 'excellent' : 'good';
} else if (failed_checks <= total_checks * 0.1) {
this.metrics.summary.overall_health = 'fair';
} else {
this.metrics.summary.overall_health = 'poor';
}
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
}
generateReport() {
const report = {
...this.metrics,
duration: new Date() - this.startTime,
environment: {
node_version: process.version,
platform: process.platform,
agents_dir: AGENTS_DIR
}
};
fs.writeFileSync(HEALTH_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Health report saved to: ${HEALTH_REPORT_PATH}`, 'info');
return report;
}
async runFullHealthCheck() {
this.log('Starting comprehensive health check...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Core directory checks
this.checkDirectoryExists(AGENTS_DIR, 'agents_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'skills'), 'skills_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules_directory');
this.checkDirectoryExists(path.join(BASE_DIR, '.windsurf', 'workflows'), 'workflows_directory');
// Core file checks
this.checkFileExists(path.join(AGENTS_DIR, 'README.md'), 'readme_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'VERSION'), 'skills_version_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'skills.md'), 'skills_documentation');
// Version consistency checks
this.checkFileVersion(path.join(AGENTS_DIR, 'README.md'), '1.8.6', 'readme_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'VERSION'), '1.8.6', 'skills_version_file_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'skills.md'), '1.8.6', 'skills_documentation_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), '1.8.6', 'project_context_version');
// Script availability checks
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash_audit_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash_sync_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell_audit_script');
// Detailed health checks
this.checkSkillHealth();
this.checkWorkflowHealth();
// Calculate overall health
this.calculateOverallHealth();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Health Check Summary ===', 'info');
this.log(`Total checks: ${this.metrics.summary.total_checks}`, 'info');
this.log(`Passed: ${this.metrics.summary.passed_checks}`, 'pass');
this.log(`Failed: ${this.metrics.summary.failed_checks}`, this.metrics.summary.failed_checks > 0 ? 'fail' : 'info');
this.log(`Warnings: ${this.metrics.summary.warnings}`, 'warn');
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
this.log(`Duration: ${new Date() - this.startTime}ms`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new HealthMonitor();
try {
const report = await monitor.runFullHealthCheck();
process.exit(report.summary.failed_checks > 0 ? 1 : 0);
} catch (error) {
console.error('Health check failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { HealthMonitor };
// Run if called directly
if (require.main === module) {
main();
}
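`checkFileVersion()` above relies on a single regex to pull a semver string out of a file. Its matching behaviour can be sketched in isolation; the sample strings are invented:

```javascript
// Standalone sketch of the semver extraction inside checkFileVersion().
// Returns the first x.y.z token found (optionally prefixed with 'v').
function extractVersion(text) {
  const m = text.match(/v?(\d+\.\d+\.\d+)/);
  return m ? m[1] : 'not_found';
}

console.log(extractVersion('# Skills Catalog v1.8.6')); // '1.8.6'
console.log(extractVersion('no version here'));         // 'not_found'
```

Because the regex matches the first version-like token anywhere in the file, a stray version string earlier in the content (e.g. in a changelog line) would shadow the intended header version; pinning the pattern to a known prefix would make the check stricter.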
@@ -1,494 +0,0 @@
#!/usr/bin/env node
/**
* performance-monitor.js - Performance monitoring for .agents skills
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { performance } = require('perf_hooks');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const PERFORMANCE_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'performance.log');
const PERFORMANCE_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'performance-report.json');
// Ensure directories exist
[ path.dirname(PERFORMANCE_LOG_PATH), path.dirname(PERFORMANCE_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Performance monitoring class
class PerformanceMonitor {
constructor() {
this.startTime = performance.now();
this.metrics = {
timestamp: new Date().toISOString(),
duration: 0,
skill_metrics: {},
workflow_metrics: {},
system_metrics: {},
summary: {
total_skills_analyzed: 0,
total_workflows_analyzed: 0,
average_skill_size: 0,
average_workflow_size: 0,
performance_score: 0,
recommendations: []
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
good: '\x1b[32m', // Green
warn: '\x1b[33m', // Yellow
poor: '\x1b[31m', // Red
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(PERFORMANCE_LOG_PATH, logEntry);
}
analyzeSkillPerformance(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`Skipping ${skillName} - SKILL.md not found`, 'warn');
return null;
}
const startTime = performance.now();
try {
const stats = fs.statSync(skillMdPath);
const content = fs.readFileSync(skillMdPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
const charCount = content.length;
// Content complexity metrics
const sectionCount = (content.match(/^#+\s/gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const listCount = (content.match(/^[-*+]\s/gm) || []).length;
// Front matter analysis
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
const frontMatterSize = frontMatterMatch ? frontMatterMatch[1].length : 0;
const hasFrontMatter = frontMatterMatch !== null;
// Readability metrics
const sentences = content.split(/[.!?]+/).filter(s => s.trim().length > 0);
const avgWordsPerSentence = sentences.length > 0 ? wordCount / sentences.length : 0;
const avgCharsPerWord = wordCount > 0 ? charCount / wordCount : 0;
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 50) performanceScore -= 10;
if (fileSizeKB > 100) performanceScore -= 20;
// Content quality bonuses
if (hasFrontMatter) performanceScore += 5;
if (sectionCount >= 3) performanceScore += 5;
if (codeBlockCount > 0) performanceScore += 5;
// Readability penalties
if (avgWordsPerSentence > 25) performanceScore -= 5;
if (avgWordsPerSentence > 35) performanceScore -= 10;
const analysisTime = performance.now() - startTime;
const skillMetrics = {
skill_name: skillName,
file_path: skillMdPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
char_count: charCount,
section_count: sectionCount,
code_block_count: codeBlockCount,
list_count: listCount,
front_matter_size: frontMatterSize,
has_front_matter: hasFrontMatter,
avg_words_per_sentence: Math.round(avgWordsPerSentence * 100) / 100,
avg_chars_per_word: Math.round(avgCharsPerWord * 100) / 100,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.skill_metrics[skillName] = skillMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${skillName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${skillName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${skillName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return skillMetrics;
} catch (error) {
this.log(`Error analyzing ${skillName}: ${error.message}`, 'warn');
return null;
}
}
analyzeWorkflowPerformance(workflowPath, workflowName) {
const startTime = performance.now();
if (!fs.existsSync(workflowPath)) {
this.log(`Skipping workflow ${workflowName} - file not found`, 'warn');
return null;
}
try {
const stats = fs.statSync(workflowPath);
const content = fs.readFileSync(workflowPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
// Workflow-specific metrics
const stepCount = (content.match(/^\d+\./gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const skillReferences = (content.match(/@speckit-\w+/g) || []).length;
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 20) performanceScore -= 10;
if (fileSizeKB > 50) performanceScore -= 20;
// Content quality bonuses
if (stepCount > 0) performanceScore += 10;
if (codeBlockCount > 0) performanceScore += 5;
if (skillReferences > 0) performanceScore += 5;
const analysisTime = performance.now() - startTime;
const workflowMetrics = {
workflow_name: workflowName,
file_path: workflowPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
step_count: stepCount,
code_block_count: codeBlockCount,
skill_references: skillReferences,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.workflow_metrics[workflowName] = workflowMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${workflowName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${workflowName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${workflowName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return workflowMetrics;
} catch (error) {
this.log(`Error analyzing workflow ${workflowName}: ${error.message}`, 'warn');
return null;
}
}
analyzeSystemMetrics() {
this.log('Analyzing system metrics...', 'info');
// Directory sizes
const agentsSize = this.getDirectorySize(AGENTS_DIR);
const skillsSize = this.getDirectorySize(SKILLS_DIR);
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
const workflowsSize = fs.existsSync(workflowsDir) ? this.getDirectorySize(workflowsDir) : 0;
// File counts
const totalFiles = this.countFiles(AGENTS_DIR);
const skillFiles = this.countFiles(SKILLS_DIR);
const workflowFiles = fs.existsSync(workflowsDir) ? this.countFiles(workflowsDir) : 0;
this.metrics.system_metrics = {
agents_directory_size_kb: Math.round(agentsSize / 1024),
skills_directory_size_kb: Math.round(skillsSize / 1024),
workflows_directory_size_kb: Math.round(workflowsSize / 1024),
total_files: totalFiles,
skill_files: skillFiles,
workflow_files: workflowFiles,
analysis_timestamp: new Date().toISOString()
};
this.log(`System: ${totalFiles} files, ${Math.round(agentsSize / 1024)}KB total`, 'info');
}
getDirectorySize(dirPath) {
let totalSize = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
totalSize += this.getDirectorySize(itemPath);
} else {
totalSize += stats.size;
}
}
return totalSize;
}
countFiles(dirPath) {
let fileCount = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
fileCount += this.countFiles(itemPath);
} else {
fileCount++;
}
}
return fileCount;
}
generateRecommendations() {
const recommendations = [];
const { skill_metrics, workflow_metrics, system_metrics } = this.metrics;
// Analyze skill performance
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 0;
if (avgSkillScore < 70) {
recommendations.push({
type: 'performance',
priority: 'high',
message: 'Average skill performance is below optimal. Consider optimizing skill documentation.',
details: `Average score: ${Math.round(avgSkillScore)}`
});
}
// Check for oversized files
const largeSkills = Object.values(skill_metrics).filter(m => m.file_size_kb > 50);
if (largeSkills.length > 0) {
recommendations.push({
type: 'size',
priority: 'medium',
message: `${largeSkills.length} skills have large file sizes (>50KB). Consider breaking down complex skills.`,
details: largeSkills.map(s => `${s.skill_name} (${s.file_size_kb}KB)`).join(', ')
});
}
// Check for missing front matter
const skillsWithoutFrontMatter = Object.values(skill_metrics).filter(m => !m.has_front_matter);
if (skillsWithoutFrontMatter.length > 0) {
recommendations.push({
type: 'structure',
priority: 'high',
message: `${skillsWithoutFrontMatter.length} skills missing front matter. Add proper YAML front matter.`,
details: skillsWithoutFrontMatter.map(s => s.skill_name).join(', ')
});
}
// Analyze workflow performance
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 0;
if (avgWorkflowScore < 70) {
recommendations.push({
type: 'performance',
priority: 'medium',
message: 'Average workflow performance could be improved. Add more detailed steps and examples.',
details: `Average score: ${Math.round(avgWorkflowScore)}`
});
}
// System recommendations
if (system_metrics.agents_directory_size_kb > 1000) {
recommendations.push({
type: 'maintenance',
priority: 'low',
message: '.agents directory is growing large. Consider archiving old logs and reports.',
details: `Current size: ${system_metrics.agents_directory_size_kb}KB`
});
}
this.metrics.summary.recommendations = recommendations;
// Log recommendations
if (recommendations.length > 0) {
this.log('Performance Recommendations:', 'info');
recommendations.forEach((rec, index) => {
const priority = rec.priority === 'high' ? 'HIGH' : rec.priority === 'medium' ? 'MED' : 'LOW';
this.log(` ${index + 1}. [${priority}] ${rec.message}`, 'warn');
});
} else {
this.log('No performance issues detected - system is optimized!', 'good');
}
}
calculateOverallPerformance() {
const { skill_metrics, workflow_metrics } = this.metrics;
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 100;
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 100;
// Weight skills more heavily than workflows
const overallScore = (avgSkillScore * 0.7) + (avgWorkflowScore * 0.3);
this.metrics.summary.performance_score = Math.round(overallScore);
this.metrics.summary.average_skill_size = skillScores.length > 0
? Math.round(Object.values(skill_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / skillScores.length * 100) / 100
: 0;
this.metrics.summary.average_workflow_size = workflowScores.length > 0
? Math.round(Object.values(workflow_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / workflowScores.length * 100) / 100
: 0;
this.metrics.summary.total_skills_analyzed = skillScores.length;
this.metrics.summary.total_workflows_analyzed = workflowScores.length;
}
generateReport() {
this.metrics.duration = performance.now() - this.startTime;
const report = {
...this.metrics,
generated_at: new Date().toISOString(),
environment: {
node_version: process.version,
platform: process.platform,
memory_usage: process.memoryUsage()
}
};
fs.writeFileSync(PERFORMANCE_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Performance report saved to: ${PERFORMANCE_REPORT_PATH}`, 'info');
return report;
}
async runPerformanceAnalysis() {
this.log('Starting performance analysis...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Analyze skills
this.log('Analyzing skill performance...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.analyzeSkillPerformance(skillPath, skillDir);
}
}
// Analyze workflows
this.log('Analyzing workflow performance...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (fs.existsSync(workflowsDir)) {
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(workflowsDir, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.analyzeWorkflowPerformance(workflowPath, workflowName);
}
}
// System metrics
this.analyzeSystemMetrics();
// Calculate overall performance
this.calculateOverallPerformance();
// Generate recommendations
this.generateRecommendations();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Performance Analysis Summary ===', 'info');
this.log(`Overall performance score: ${this.metrics.summary.performance_score}/100`, 'info');
this.log(`Skills analyzed: ${this.metrics.summary.total_skills_analyzed}`, 'info');
this.log(`Workflows analyzed: ${this.metrics.summary.total_workflows_analyzed}`, 'info');
this.log(`Average skill size: ${this.metrics.summary.average_skill_size}KB`, 'info');
this.log(`Average workflow size: ${this.metrics.summary.average_workflow_size}KB`, 'info');
this.log(`Analysis duration: ${Math.round(this.metrics.duration)}ms`, 'info');
this.log(`Recommendations: ${this.metrics.summary.recommendations.length}`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new PerformanceMonitor();
try {
const report = await monitor.runPerformanceAnalysis();
process.exit(report.summary.performance_score < 60 ? 1 : 0);
} catch (error) {
console.error('Performance analysis failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { PerformanceMonitor };
// Run if called directly
if (require.main === module) {
main();
}
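The size/quality heuristic inside `analyzeSkillPerformance` can be exercised in isolation. The sketch below restates the same thresholds as a standalone function (`scoreSkill` is illustrative, not part of the module's exported API):

```javascript
// Illustrative re-statement of the skill scoring heuristic above.
// Inputs mirror the metrics computed in analyzeSkillPerformance.
function scoreSkill({ fileSizeKB, hasFrontMatter, sectionCount, codeBlockCount, avgWordsPerSentence }) {
  let score = 100;
  // Size penalties are cumulative: a 120KB file loses 30 points
  if (fileSizeKB > 50) score -= 10;
  if (fileSizeKB > 100) score -= 20;
  // Content quality bonuses
  if (hasFrontMatter) score += 5;
  if (sectionCount >= 3) score += 5;
  if (codeBlockCount > 0) score += 5;
  // Readability penalties are also cumulative
  if (avgWordsPerSentence > 25) score -= 5;
  if (avgWordsPerSentence > 35) score -= 10;
  // Clamp to 0..100, as the monitor does before reporting
  return Math.max(0, Math.min(100, score));
}

// A small, well-structured skill earns all three bonuses and is clamped at 100
console.log(scoreSkill({ fileSizeKB: 12, hasFrontMatter: true, sectionCount: 5, codeBlockCount: 2, avgWordsPerSentence: 18 })); // → 100
// An oversized, unstructured skill takes every penalty
console.log(scoreSkill({ fileSizeKB: 120, hasFrontMatter: false, sectionCount: 1, codeBlockCount: 0, avgWordsPerSentence: 40 })); // → 55
```

Note the asymmetry in the heuristic: penalties can drive a score to 0, while bonuses cap out at 100, so the score rewards lean, structured documentation over sheer volume.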
@@ -1,198 +0,0 @@
# audit-skills.ps1 - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot)))
)
# Map to ConsoleColor enum (Write-Host expects enum, not ANSI strings)
$Colors = @{
Red = 'Red'
Green = 'Green'
Yellow = 'Yellow'
Blue = 'Blue'
NoColor = 'Gray'
}
$AgentsDir = Join-Path $BaseDir ".agents"
$SkillsDir = Join-Path $AgentsDir "skills"
Write-Host "=== Skills Health Audit ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host ""
# Function to check if skill has required files
function Test-SkillHealth {
param(
[string]$SkillDir
)
$skillName = Split-Path $SkillDir -Leaf
$issues = 0
# Check for SKILL.md
$skillFile = Join-Path $SkillDir "SKILL.md"
if (Test-Path $skillFile) {
Write-Host " OK: $skillName/SKILL.md" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING: $skillName/SKILL.md" -ForegroundColor $Colors.Red
$issues++
}
# Check for templates directory (optional)
$templatesDir = Join-Path $SkillDir "templates"
if (Test-Path $templatesDir) {
$templateCount = (Get-ChildItem -Path $templatesDir -Filter "*.md" -File | Measure-Object).Count
if ($templateCount -gt 0) {
Write-Host " OK: $skillName/templates ($templateCount files)" -ForegroundColor $Colors.Green
} else {
Write-Host " EMPTY: $skillName/templates (no files)" -ForegroundColor $Colors.Yellow
}
}
# Check SKILL.md content if exists
if (Test-Path $skillFile) {
$content = Get-Content $skillFile -Raw
# Check for required front matter fields
$requiredFields = @('name', 'description', 'version')
foreach ($field in $requiredFields) {
$pattern = "(?m)^${field}:"
if ($content -match $pattern) {
Write-Host " FIELD: $field" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING FIELD: $field" -ForegroundColor $Colors.Red
$issues++
}
}
# Check for LCBP3 context reference (speckit-* skills)
if ($skillName -like 'speckit-*') {
if ($content -match '_LCBP3-CONTEXT\.md') {
Write-Host " CONTEXT: LCBP3 appendix referenced" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING: LCBP3 context reference" -ForegroundColor $Colors.Yellow
$issues++
}
}
}
return $issues
}
# Function to get skill version from SKILL.md
function Get-SkillVersion {
param(
[string]$SkillFile
)
if (Test-Path $SkillFile) {
try {
$content = Get-Content $SkillFile -Raw
if ($content -match "(?m)^version:\s*['""]?([0-9]+\.[0-9]+\.[0-9]+)['""]?") {
return $matches[1].Trim()
}
} catch {
return "error"
}
}
return "no_file"
}
# Check skills directory
if (-not (Test-Path $SkillsDir)) {
Write-Host "ERROR: Skills directory not found" -ForegroundColor $Colors.Red
exit 1
}
Write-Host "Scanning skills directory: $SkillsDir"
Write-Host ""
# Get all skill directories
$skillDirs = Get-ChildItem -Path $SkillsDir -Directory | Sort-Object Name
Write-Host "Found $($skillDirs.Count) skill directories"
Write-Host ""
# Audit each skill
$totalIssues = 0
$skillSummary = @()
foreach ($skillDir in $skillDirs) {
$skillName = $skillDir.Name
Write-Host "Auditing: $skillName"
Write-Host "------------------------"
$issues = Test-SkillHealth -SkillDir $skillDir.FullName
$skillVersion = Get-SkillVersion -SkillFile (Join-Path $skillDir.FullName "SKILL.md")
$skillSummary += @{
Name = $skillName
Issues = $issues
Version = $skillVersion
}
$totalIssues += $issues
Write-Host ""
}
# Summary report
Write-Host "=== Skills Audit Summary ===" -ForegroundColor Cyan
Write-Host ""
Write-Host "Skill Status:"
Write-Host "-----------"
foreach ($summary in $skillSummary) {
if ($summary.Issues -eq 0) {
Write-Host " HEALTHY: $($summary.Name) (v$($summary.Version))" -ForegroundColor $Colors.Green
} else {
Write-Host " ISSUES: $($summary.Name) (v$($summary.Version)) - $($summary.Issues) issues" -ForegroundColor $Colors.Red
}
}
Write-Host ""
# Check skills.md version consistency
$skillsVersionFile = Join-Path $SkillsDir "VERSION"
if (Test-Path $skillsVersionFile) {
$content = Get-Content $skillsVersionFile -Raw
if ($content -match "^version:\s*(.+)") {
$globalVersion = $matches[1].Trim()
Write-Host "Global skills version: v$globalVersion"
Write-Host ""
# Check for version mismatches
Write-Host "Version Consistency Check:"
Write-Host "------------------------"
$versionMismatches = 0
foreach ($summary in $skillSummary) {
# Get-SkillVersion returns "error" or "no_file" when the version cannot be read
if ($summary.Version -notin @("error", "no_file") -and $summary.Version -ne $globalVersion) {
Write-Host " MISMATCH: $($summary.Name) is v$($summary.Version), global is v$globalVersion" -ForegroundColor $Colors.Yellow
$versionMismatches++
}
}
if ($versionMismatches -eq 0) {
Write-Host " All skills match global version" -ForegroundColor $Colors.Green
}
}
}
Write-Host ""
# Overall health
if ($totalIssues -eq 0) {
Write-Host "=== SUCCESS: All skills healthy ===" -ForegroundColor $Colors.Green
Write-Host "Total skills: $($skillDirs.Count)"
exit 0
} else {
Write-Host "=== ISSUES FOUND: $totalIssues total issues ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "Recommendations:"
Write-Host "1. Fix missing SKILL.md files"
Write-Host "2. Add required front matter fields"
Write-Host "3. Ensure Role and Task sections exist"
Write-Host "4. Align skill versions with global version"
exit 1
}
@@ -1,110 +0,0 @@
# validate-versions.ps1 - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot))),
[string]$ExpectedVersion = "1.8.9"
)
# Map to ConsoleColor enum (Write-Host expects enum, not ANSI)
$Colors = @{
Red = 'Red'
Green = 'Green'
Yellow = 'Yellow'
NoColor = 'Gray'
}
$AgentsDir = Join-Path $BaseDir ".agents"
Write-Host "=== .agents Version Validation ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host "Expected version: $ExpectedVersion"
Write-Host ""
# Function to extract version from file
function Get-VersionFromFile {
param(
[string]$FilePath,
[string]$Pattern
)
if (Test-Path $FilePath) {
try {
$content = Get-Content $FilePath -Raw
if ($content -match $Pattern) {
return $matches[1]
} else {
return "NOT_FOUND"
}
} catch {
return "ERROR"
}
} else {
return "FILE_NOT_FOUND"
}
}
# Files to check
$FilesToCheck = @{
(Join-Path $AgentsDir "skills\VERSION") = "version: ([0-9]+\.[0-9]+\.[0-9]+)"
(Join-Path $AgentsDir "skills\skills.md") = "V([0-9]+\.[0-9]+\.[0-9]+)"
}
# Track issues
$Issues = 0
Write-Host "Checking version consistency..."
Write-Host ""
foreach ($file in $FilesToCheck.Keys) {
$pattern = $FilesToCheck[$file]
$relativePath = $file.Replace($BaseDir + "\", "")
$version = Get-VersionFromFile -FilePath $file -Pattern $pattern
if ($version -eq "NOT_FOUND" -or $version -eq "FILE_NOT_FOUND") {
Write-Host " ERROR: $relativePath - Version not found" -ForegroundColor $Colors.Red
$Issues++
} elseif ($version -ne $ExpectedVersion) {
Write-Host " ERROR: $relativePath - Found v$version, expected v$ExpectedVersion" -ForegroundColor $Colors.Red
$Issues++
} else {
Write-Host " OK: $relativePath - v$version" -ForegroundColor $Colors.Green
}
}
Write-Host ""
# Check for version mismatches in skill files
Write-Host "Checking skill file versions..."
$SkillsVersionFile = Join-Path $AgentsDir "skills\VERSION"
if (Test-Path $SkillsVersionFile) {
$skillsVersion = Get-VersionFromFile -FilePath $SkillsVersionFile -Pattern "version: ([0-9]+\.[0-9]+\.[0-9]+)"
Write-Host "Skills version file: v$skillsVersion"
}
# Check workflow versions (in .windsurf\workflows)
$WorkflowsDir = Join-Path $BaseDir ".windsurf\workflows"
if (Test-Path $WorkflowsDir) {
Write-Host "Checking workflow files..."
$workflowCount = (Get-ChildItem -Path $WorkflowsDir -Filter "*.md" -File | Measure-Object).Count
Write-Host " OK: Found $workflowCount workflow files" -ForegroundColor $Colors.Green
} else {
Write-Host " WARNING: Workflows directory not found at $WorkflowsDir" -ForegroundColor $Colors.Yellow
}
Write-Host ""
# Summary
if ($Issues -eq 0) {
Write-Host "=== SUCCESS: All versions consistent ===" -ForegroundColor $Colors.Green
exit 0
} else {
Write-Host "=== FAILED: $Issues version issues found ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "To fix version issues:"
Write-Host "1. Update files to use v$ExpectedVersion"
Write-Host "2. Ensure LCBP3 project version matches"
Write-Host "3. Run this script again to verify"
exit 1
}
@@ -1,109 +0,0 @@
# `.agents/skills/` — LCBP3 Agent Skill Pack
**Version:** 1.8.9 | **Last Updated:** 2026-04-22 | **Total Skills:** 20
Agent skills for AI-assisted development in **Windsurf IDE** (and compatible agents: Codex CLI, opencode, Amp, Antigravity, AGENTS.md-aware tools).
---
## 📂 Layout
```
.agents/skills/
├── VERSION # Single source of truth for skill-pack version
├── skills.md # Overview + dependency matrix + health monitoring
├── _LCBP3-CONTEXT.md # Shared LCBP3 context injected into every speckit-* skill
├── README.md # (this file)
├── nestjs-best-practices/ # Backend rules (40 rules across 10 categories)
├── next-best-practices/ # Frontend rules (Next.js 15+)
└── speckit-*/ # 18 workflow skills (spec → plan → tasks → implement → …)
```
Each skill directory contains:
- `SKILL.md` — frontmatter (`name`, `description`, `version: 1.8.9`, `scope`, `depends-on`, `handoffs`) + instructions
- `templates/` _(optional)_ — artifact templates (spec/plan/tasks/checklist)
- `rules/` _(nestjs only)_ — individual rule files grouped by prefix (`arch-`, `security-`, `db-`, etc.)
---
## 🚀 How Windsurf Invokes These Skills
Windsurf exposes two entry points:
1. **Skill tool** — Windsurf discovers skills by scanning `.agents/skills/*/SKILL.md` frontmatter. Skills marked `user-invocable: false` are used silently by Cascade.
2. **Slash commands** — `.windsurf/workflows/*.md` wraps each skill as a slash command (e.g. `/04-speckit.plan`). The workflow file is short; the heavy lifting is delegated to the skill via the `skill` tool.
Both paths end up executing the same `SKILL.md` instructions.
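A minimal `SKILL.md` frontmatter satisfying this discovery contract might look like the following (field names come from the layout above; the values are illustrative, not copied from a real skill):

```yaml
---
name: example-skill
description: One-line summary Cascade uses to decide when to invoke the skill.
version: 1.8.9
scope: backend
depends-on: []
user-invocable: false   # used silently by Cascade; no slash command exposed
---
```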
---
## 🧭 Typical Flow
```
/01-speckit.constitution → AGENTS.md / product vision
/02-speckit.specify → specs/feat-XXX/spec.md
/03-speckit.clarify → updates spec.md (up to 5 targeted questions)
/04-speckit.plan → specs/feat-XXX/plan.md + data-model.md + contracts/
/05-speckit.tasks → specs/feat-XXX/tasks.md
/06-speckit.analyze → cross-artifact consistency report (read-only)
/07-speckit.implement → executes tasks with Ironclad Protocols (Blast Radius + Strangler + TDD)
/08-speckit.checker → pnpm lint / typecheck / markdown-lint
/09-speckit.tester → pnpm test + coverage gates (Backend 70%+, Business Logic 80%+)
/10-speckit.reviewer → code review with Tier 1/2/3 classification
/11-speckit.validate → UAT / acceptance-criteria.md
```
Use `/00-speckit.all` to run specify → clarify → plan → tasks → analyze in one go.
---
## 🛠️ Helper Scripts
From repo root:
| Script | Purpose |
| --- | --- |
| `./.agents/scripts/bash/check-prerequisites.sh --json` | Emit `FEATURE_DIR` + `AVAILABLE_DOCS` for a feature branch |
| `./.agents/scripts/bash/setup-plan.sh --json` | Emit `FEATURE_SPEC`, `IMPL_PLAN`, `SPECS_DIR`, `BRANCH` |
| `./.agents/scripts/bash/update-agent-context.sh windsurf` | Append tech entries to `AGENTS.md` |
| `./.agents/scripts/bash/audit-skills.sh` | Validate all `SKILL.md` frontmatter + presence |
| `./.agents/scripts/bash/validate-versions.sh` | Version consistency check |
| `./.agents/scripts/bash/sync-workflows.sh` | Verify every skill has a `.windsurf/workflows/*.md` wrapper |
All scripts mirror to `.agents/scripts/powershell/*.ps1` for Windows.
---
## ⚠️ Tier 1 Non-Negotiables (auto-enforced)
- ADR-019 — `publicId` exposed directly; no `parseInt` / `Number` / `+` on UUID; no `id ?? ''` fallback
- ADR-009 — edit SQL schema directly, no TypeORM migrations
- ADR-016 — JWT + CASL on every mutation; `Idempotency-Key` required; ClamAV two-phase upload
- ADR-018 — AI via DMS API only (Ollama on Admin Desktop; no direct DB/storage)
- ADR-007 — layered error classification (Validation / Business / System)
- Zero `any`, zero `console.log` (use `Logger`)
See [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md) for the complete list.
---
## 🤝 Extending
To add a new skill:
1. Create `NAME/SKILL.md` with frontmatter: `name`, `description`, `version: 1.8.9`, `scope`, `depends-on`.
2. Append an LCBP3 context reference pointing to `_LCBP3-CONTEXT.md`.
3. Wrap with `.windsurf/workflows/NAME.md` so it becomes a slash command.
4. Update [`skills.md`](./skills.md) dependency matrix.
5. Run `./.agents/scripts/bash/audit-skills.sh` → must pass.
---
## 📚 References
- **Canonical rules:** `AGENTS.md` (repo root)
- **Product vision:** `specs/00-Overview/00-03-product-vision.md`
- **ADRs:** `specs/06-Decision-Records/`
- **Engineering guidelines:** `specs/05-Engineering-Guidelines/`
- **Contributing:** `CONTRIBUTING.md`
@@ -1,25 +1,10 @@
 # Speckit Skills Version
-version: 1.8.9
-release_date: 2026-04-22
+version: 1.1.0
+release_date: 2026-01-24
 ## Changelog
-### 1.8.9 (2026-04-22)
-- Full LCBP3-native rebuild of `.agents/skills/`
-- Fixed ADR-019 drift (removed `@Expose({ name: 'id' })` and `id ?? ''` fallback patterns)
-- Replaced all dead references (`GEMINI.md` → `AGENTS.md`, v1.7.0 → v1.8.0 schema, `.specify/memory/` → `AGENTS.md`)
-- Added real helper scripts under `.agents/scripts/bash/` and `.agents/scripts/powershell/`
-- Added ADR-007/008/020/021 coverage
-- New rules: workflow-engine, file-two-phase-upload, ai-boundary, i18n, file-upload, workflow-banner
-- Standardized frontmatter across all 20 skills (`version: 1.8.9`)
-### 1.8.6 (2026-04-14)
-- Version alignment with LCBP3-DMS v1.8.6
-- Complete skill implementations for all 20 skills
-- Enhanced security and audit capabilities
-- Production-ready deployment status
 ### 1.1.0 (2026-01-24)
 - New QA skills: tester, reviewer, checker
 - tester: Execute tests, measure coverage, report results
@@ -1,91 +0,0 @@
# 🧭 LCBP3-DMS Context Appendix (Shared)
> This file is included/referenced by every Speckit skill as the authoritative project context.
> Skills **must** load it (or the files it links to) before generating any artifact.
**Project:** NAP-DMS (LCBP3) — Laem Chabang Port Phase 3 Document Management System
**Stack:** NestJS 11 + Next.js 16 + TypeScript + MariaDB 11.8 + Redis + BullMQ + Elasticsearch + Ollama (on-prem AI)
**Version:** 1.8.9 (2026-04-18)
---
## 📌 Canonical Rule Sources (read in this order)
1. **`AGENTS.md`** (repo root) — primary rule file for AI agents; supersedes legacy `GEMINI.md`.
2. **`specs/06-Decision-Records/`** — architectural decisions (22 ADRs); ADR priority > Engineering Guidelines.
3. **`specs/05-Engineering-Guidelines/`** — backend/frontend/testing/i18n/git patterns.
4. **`specs/00-Overview/00-02-glossary.md`** — domain terminology (Correspondence / RFA / Transmittal / Circulation).
5. **`specs/00-Overview/00-03-product-vision.md`** — project constitution (Vision, Strategic Pillars, Guardrails).
6. **`CONTRIBUTING.md`** — spec writing standards, PR template, review levels.
7. **`README.md`** — technology stack + getting started.
---
## 🔴 Tier 1 Non-Negotiables
- **ADR-019 UUID:** `publicId: string` exposed directly — **no** `@Expose({ name: 'id' })` rename; **no** `parseInt`/`Number`/`+` on UUID; **no** `id ?? ''` fallback in frontend.
- **ADR-009:** No TypeORM migrations — edit `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` or add a `deltas/*.sql` file.
- **ADR-016 Security:** JWT + CASL 4-Level RBAC; `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on every mutation controller; `ThrottlerGuard` on auth; bcrypt 12 rounds; `Idempotency-Key` required on POST/PUT/PATCH.
- **ADR-002 Document Numbering:** Redis Redlock + TypeORM `@VersionColumn` (double-lock). Never use application-side counter alone.
- **ADR-008 Notifications:** BullMQ queue — never inline email/notification in a request thread.
- **ADR-018 AI Boundary:** Ollama on Admin Desktop only; AI → DMS API → DB (never direct DB/storage). Human-in-the-loop validation required.
- **ADR-007 Error Handling:** Layered (Validation / Business / System); `BusinessException` hierarchy; user-friendly `userMessage` + `recoveryAction`; technical stack only in logs.
- **TypeScript Strict:** Zero `any`, zero `console.log` (use NestJS `Logger`).
- **i18n:** No hardcoded Thai/English strings in components — use i18n keys (see `05-08-i18n-guidelines.md`).
- **File Upload:** Two-phase (Temp → ClamAV → Permanent), whitelist `PDF/DWG/DOCX/XLSX/ZIP`, max 50MB, `StorageService` only.
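The ADR-019 rule above can be made concrete with a small sketch (plain JavaScript for brevity; the `toApiShape` helper is hypothetical — the real backend uses NestJS class-transformer DTOs):

```javascript
// Illustrative sketch of the ADR-019 UUID rule: publicId is exposed
// directly as a string and never coerced to a number.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function toApiShape(entity) {
  if (!UUID_RE.test(entity.publicId)) {
    throw new TypeError(`publicId is not a UUID: ${entity.publicId}`);
  }
  // Correct: pass the UUID through untouched.
  // Forbidden by ADR-019: Number(entity.publicId), +entity.publicId,
  // parseInt(entity.publicId), or a publicId ?? '' fallback.
  return { publicId: entity.publicId, title: entity.title };
}

const dto = toApiShape({ publicId: '3f8a1c2e-9b4d-4e6f-8a1b-2c3d4e5f6a7b', title: 'RFA-001' });
console.log(dto.publicId); // the UUID string, unchanged
```

Failing fast on a non-UUID value is the point: silent numeric coercion is exactly the drift the rule exists to prevent.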
---
## 🏷️ Domain Glossary (reject generic terms)
| ✅ Use | ❌ Don't Use |
| --- | --- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Transmittal | Delivery Note, Cover Letter |
| Circulation | Distribution, Routing |
| Shop Drawing | Construction Drawing |
| Contract Drawing | Design Drawing, Blueprint |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
---
## 📁 Key Files for Generating / Validating Artifacts
| When you need... | Read |
| --- | --- |
| A new feature spec | `.agents/skills/speckit-specify/templates/spec-template.md` + `specs/01-Requirements/01-06-edge-cases-and-rules.md` |
| A plan | `.agents/skills/speckit-plan/templates/plan-template.md` + relevant ADRs |
| Task breakdown | `.agents/skills/speckit-tasks/templates/tasks-template.md` + existing patterns in `specs/08-Tasks/` |
| Acceptance criteria / UAT | `specs/01-Requirements/01-05-acceptance-criteria.md` |
| Schema / table definition | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` + `03-01-data-dictionary.md` |
| RBAC / permissions | `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` + `01-02-01-rbac-matrix.md` |
| Release / hotfix | `specs/04-Infrastructure-OPS/04-08-release-management-policy.md` |
---
## 🛠️ Helper Scripts (real paths in this repo)
- `./.agents/scripts/bash/check-prerequisites.sh` / `powershell/*.ps1`
- `./.agents/scripts/bash/setup-plan.sh`
- `./.agents/scripts/bash/update-agent-context.sh windsurf`
- `./.agents/scripts/bash/audit-skills.sh`
- `./.agents/scripts/bash/validate-versions.sh`
- `./.agents/scripts/bash/sync-workflows.sh`
---
## ✅ Commit Checklist (applied automatically by speckit-implement)
- [ ] UUID pattern verified (no `parseInt` / `Number` / `+` on UUID, no `id ?? ''` fallback)
- [ ] No `any`, no `console.log` in committed code
- [ ] Business comments in Thai, code identifiers in English
- [ ] Schema changes via SQL directly (not migration)
- [ ] Test coverage meets targets (Backend 70%+, Business Logic 80%+)
- [ ] Relevant ADRs referenced (007/008/009/016/018/019/020/021)
- [ ] Domain glossary terms used correctly
- [ ] Error handling: `Logger` + `HttpException` / `BusinessException`
- [ ] i18n keys used (no hardcode text)
- [ ] Cache invalidation when data mutated
- [ ] OWASP Top 10 review passed
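The error-handling checklist item above (ADR-007) can be sketched as follows — a hypothetical `BusinessException` shape for illustration, not the project's actual class:

```javascript
// Hypothetical sketch of ADR-007 layered error handling: users see only
// userMessage + recoveryAction; the technical detail stays in logs.
class BusinessException extends Error {
  constructor(message, userMessage, recoveryAction) {
    super(message);                   // technical detail, for logs only
    this.name = 'BusinessException';
    this.userMessage = userMessage;   // safe to show in the UI
    this.recoveryAction = recoveryAction;
  }
}

function toUserFacing(err) {
  if (err instanceof BusinessException) {
    return { message: err.userMessage, action: err.recoveryAction };
  }
  // System errors are never leaked verbatim to users.
  return { message: 'Unexpected error', action: 'Please try again later' };
}

const e = new BusinessException(
  'lock contention on document_number counter',   // logged, never shown
  'Could not reserve a document number',
  'Retry the submission'
);
console.log(toUserFacing(e).message); // → Could not reserve a document number
```

The same `toUserFacing` boundary handles the Validation / Business / System split: anything not explicitly classified falls through to the generic system-error message.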
@@ -36,12 +36,11 @@ npx skills add Kadajett/agent-nestjs-skills -a claude-code -a cursor
- `area-description.md` - Individual rule files - `area-description.md` - Individual rule files
- `scripts/` - Build scripts and utilities - `scripts/` - Build scripts and utilities
- `metadata.json` - Document metadata (version, organization, abstract) - `metadata.json` - Document metadata (version, organization, abstract)
- **`AGENTS.md`** - Compiled output (generated) - __`AGENTS.md`__ - Compiled output (generated)
## Getting Started ## Getting Started
1. Install dependencies: 1. Install dependencies:
```bash ```bash
cd scripts && npm install cd scripts && npm install
``` ```
@@ -75,7 +74,7 @@ npx skills add Kadajett/agent-nestjs-skills -a claude-code -a cursor
Each rule file should follow this structure: Each rule file should follow this structure:
````markdown ```markdown
--- ---
title: Rule Title Here title: Rule Title Here
impact: MEDIUM impact: MEDIUM
@@ -92,7 +91,6 @@ Brief explanation of the rule and why it matters.
```typescript ```typescript
// Bad code example // Bad code example
``` ```
````
**Correct (description of what's right):** **Correct (description of what's right):**
@@ -104,6 +102,7 @@ Optional explanatory text after examples.
Reference: [NestJS Documentation](https://docs.nestjs.com) Reference: [NestJS Documentation](https://docs.nestjs.com)
## File Naming Convention ## File Naming Convention
- Files starting with `_` are special (excluded from build) - Files starting with `_` are special (excluded from build)
@@ -115,7 +114,7 @@ Reference: [NestJS Documentation](https://docs.nestjs.com)
## Impact Levels
| Level | Description |
| ----------- | ------------------------------------------------------------------------------------- | |-------|-------------|
| CRITICAL | Violations cause runtime errors, security vulnerabilities, or architectural breakdown |
| HIGH | Significant impact on reliability, security, or maintainability |
| MEDIUM-HIGH | Notable impact on quality and developer experience |
@@ -161,3 +160,4 @@ These NestJS skills work with:
- [Claude Code](https://claude.ai/code) - Anthropic's official CLI
- [AdaL](https://sylph.ai/adal) - Self-evolving AI coding agent with MCP support
+8 -139
@@ -1,12 +1,10 @@
---
name: nestjs-best-practices
description: NestJS best practices and architecture patterns for building production-ready LCBP3-DMS backend code. Enforces ADR-009 (no TypeORM migrations), ADR-019 (hybrid UUID), ADR-016 (security), ADR-007 (error handling), ADR-008 (BullMQ), ADR-001/002 (workflow + numbering), ADR-018/020 (AI boundary), and ADR-021 (workflow context). description: NestJS best practices and architecture patterns for building production-ready applications. This skill should be used when writing, reviewing, or refactoring NestJS code to ensure proper patterns for modules, dependency injection, security, and performance.
version: 1.8.9
scope: backend
user-invocable: false
license: MIT
metadata:
upstream: 'Kadajett/nestjs-best-practices v1.1.0 (forked + LCBP3-aligned)' author: Kadajett
version: "1.1.0"
---
# NestJS Best Practices
@@ -27,7 +25,7 @@ Reference these guidelines when:
## Rule Categories by Priority
| Priority | Category | Impact | Prefix |
| -------- | -------------------- | ----------- | ----------- | |----------|----------|--------|--------|
| 1 | Architecture | CRITICAL | `arch-` |
| 2 | Dependency Injection | CRITICAL | `di-` |
| 3 | Error Handling | HIGH | `error-` |
@@ -88,10 +86,9 @@ Reference these guidelines when:
### 7. Database & ORM (MEDIUM-HIGH)
- `db-hybrid-identifier` - **CRITICAL** ADR-019: INT PK + UUID public API - `db-use-transactions` - Transaction management
- `db-avoid-n-plus-one` - HIGH N+1 query prevention - `db-avoid-n-plus-one` - Avoid N+1 query problems
- `db-use-transactions` - HIGH Transaction management - `db-use-migrations` - Use migrations for schema changes
- `db-no-typeorm-migrations` - **CRITICAL** ADR-009: No TypeORM migrations - use SQL files
### 8. API Design (MEDIUM)
@@ -112,134 +109,7 @@ Reference these guidelines when:
- `devops-use-logging` - Structured logging
- `devops-graceful-shutdown` - Zero-downtime deployments
### 11. LCBP3-Specific (CRITICAL — Project Overrides) ## How to Use
- `db-no-typeorm-migrations` - **CRITICAL** ADR-009: edit SQL directly
- `lcbp3-workflow-engine` - **CRITICAL** ADR-001/002/021: DSL state machine + double-lock numbering + workflow context
- `security-file-two-phase-upload` - **CRITICAL** ADR-016: Upload → Temp → ClamAV → Commit
- `lcbp3-ai-boundary` - **CRITICAL** ADR-018/020: Ollama on-prem only, human-in-the-loop
## NAP-DMS Project-Specific Rules (MUST FOLLOW)
These rules override general NestJS best practices for the NAP-DMS project:
### ADR-009: No TypeORM Migrations
- **Do not create TypeORM migration files**
- Edit the schema directly at: `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
- Use n8n workflows for data migration when needed
### ADR-019: Hybrid Identifier Strategy (CRITICAL — March 2026 Pattern)
> **Updated pattern:** `UuidBaseEntity` exposes `publicId` **directly**. Do not use `@Expose({ name: 'id' })`; the API returns `publicId` as the field name as-is.
```typescript
// ✅ CORRECT - extend UuidBaseEntity
@Entity()
export class Project extends UuidBaseEntity {
// publicId (UUIDv7 string) + id (INT, @Exclude) are inherited from UuidBaseEntity
// API response → { publicId: "019505a1-7c3e-7000-8000-abc123..." }
@Column()
projectCode: string;
@Column()
projectName: string;
}
```
```typescript
// ❌ WRONG - legacy pattern, do not use
@Entity()
export class OldProject {
@PrimaryGeneratedColumn()
@Exclude()
id: number;
@Column({ type: 'uuid' })
@Expose({ name: 'id' }) // ❌ do not rename publicId to 'id'
publicId: string;
}
```
**DTO Input (accepting a UUID from the frontend):**
```typescript
export class CreateContractDto {
@IsUUID('7')
projectUuid: string; // accept a UUID string from the client
}
// Controller resolves UUID → INT internally
@Post()
async create(@Body() dto: CreateContractDto) {
const projectId = await this.projectService.resolveInternalId(dto.projectUuid);
return this.contractService.create({ ...dto, projectId });
}
```
**Strictly forbidden (CI blockers):**
- ❌ `parseInt(projectPublicId)` — "019505…" → 19505 (silently wrong)
- ❌ `Number(publicId)` / `+publicId` — NaN
- ❌ `@Expose({ name: 'id' })` on `publicId` (legacy pattern)
- ❌ Exposing the INT `id` in API responses (always `@Exclude()`)
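The failure mode is easy to reproduce in plain TypeScript (the UUID below is an illustrative UUIDv7 value, not a real record):

```typescript
// Illustrative UUIDv7 string (hypothetical value, for demonstration only)
const projectPublicId = "019505a1-7c3e-7000-8000-0123456789ab";

// parseInt consumes leading digits and stops at the first non-digit:
// "019505" parses to 19505, and no error is thrown
const wrong1 = parseInt(projectPublicId); // 19505 (silently wrong)

// Number() and unary + reject the whole string instead:
const wrong2 = Number(projectPublicId); // NaN
const wrong3 = +projectPublicId; // NaN

console.log(wrong1, wrong2, wrong3);
```

Because `parseInt` succeeds without throwing, the bad value can reach a foreign-key column unnoticed, which is why this is a CI blocker rather than a lint warning.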
### Two-Phase File Upload
```typescript
// Phase 1: Upload to temp
@Post('upload')
async uploadFile(@UploadedFile() file: Express.Multer.File) {
await this.virusScan(file);
const tempId = await this.fileStorage.saveToTemp(file);
return { temp_id: tempId, expires_at: addHours(new Date(), 24) };
}
// Phase 2: Commit in transaction
async createEntity(dto: CreateDto, tempIds: string[]) {
return this.dataSource.transaction(async (manager) => {
const entity = await manager.save(Entity, dto);
await this.fileStorage.commitFiles(tempIds, entity.id, manager);
return entity;
});
}
```
### Idempotency Requirement
- Every important POST/PUT/PATCH must carry an `Idempotency-Key` header
- Use the existing `IdempotencyInterceptor`
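A minimal framework-free sketch of the check such an interceptor performs (assumption: the project's real `IdempotencyInterceptor` stores keys in Redis with a TTL; an in-memory Map stands in for it here):

```typescript
// In-memory stand-in for the idempotency store (Redis in the real setup)
const processed = new Map<string, unknown>();

function handleWithIdempotency<T>(key: string, handler: () => T): T {
  if (processed.has(key)) {
    // Replay: return the cached response instead of re-executing the handler
    return processed.get(key) as T;
  }
  const result = handler();
  processed.set(key, result);
  return result;
}

// First call executes the handler; the retry returns the cached result.
let executions = 0;
const first = handleWithIdempotency("req-123", () => ++executions);
const retry = handleWithIdempotency("req-123", () => ++executions);
console.log(first, retry, executions); // 1 1 1
```

The point of the pattern: a client that retries a POST after a network timeout gets the original response back instead of creating a duplicate record.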
### Document Numbering (Double-Lock)
```typescript
async generateNextNumber(context: NumberingContext): Promise<string> {
const lockKey = `doc_num:${context.projectId}:${context.typeId}`;
const lock = await this.redisLock.acquire(lockKey, 3000);
try {
const counter = await this.counterRepo.findOne({
where: context,
lock: { mode: 'optimistic' },
});
counter.last_number++;
return this.formatNumber(await this.counterRepo.save(counter));
} finally {
await lock.release();
}
}
```
### Anti-Patterns (Forbidden)
- ❌ SQL triggers for business logic
- ❌ `.env` files in production (use Docker ENV)
- ❌ The `any` type (strict mode is enforced)
- ❌ `console.log` (use the NestJS Logger)
- ❌ Separate routing tables (use the Workflow Engine)
---
Read individual rule files for detailed explanations and code examples:
@@ -250,7 +120,6 @@ rules/_sections.md
```
Each rule file contains:
- Brief explanation of why it matters
- Incorrect code example with explanation
- Correct code example with explanation
@@ -1,24 +0,0 @@
{
"version": "1.8.9",
"organization": "**NAP-DMS / LCBP3** — Laem Chabang Port Phase 3 Document Management System",
"date": "2026-04-22",
"abstract": "Comprehensive NestJS best-practices guide compiled for the LCBP3-DMS backend. Contains 40+ rules across 11 categories (10 general + 1 project-specific), prioritized by impact. Forked from Kadajett/nestjs-best-practices (v1.1.0) and aligned to LCBP3 ADRs: ADR-001 (workflow engine), ADR-002 (document numbering), ADR-007 (error handling), ADR-008 (notifications/BullMQ), ADR-009 (no TypeORM migrations), ADR-016 (security), ADR-018/020 (AI boundary), ADR-019 (hybrid UUID identifier — March 2026 pattern), and ADR-021 (workflow context).\n\nThis document is the single, consolidated reference used by Cascade and other AI coding agents when writing, reviewing, or refactoring backend code in this repository. All LCBP3-specific overrides live in section 11.",
"references": [
"[AGENTS.md (root)](../../../AGENTS.md) — canonical AI agent rules",
"[CONTRIBUTING.md](../../../CONTRIBUTING.md) — spec authoring + PR process",
"[ADR-001 Unified Workflow Engine](../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)",
"[ADR-002 Document Numbering Strategy](../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)",
"[ADR-007 Error Handling Strategy](../../../specs/06-Decision-Records/ADR-007-error-handling-strategy.md)",
"[ADR-008 Email/Notification Strategy](../../../specs/06-Decision-Records/ADR-008-email-notification-strategy.md)",
"[ADR-009 Database Migration Strategy](../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)",
"[ADR-016 Security & Authentication](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)",
"[ADR-018 AI Boundary](../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)",
"[ADR-019 Hybrid Identifier Strategy](../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)",
"[ADR-020 AI Intelligence Integration](../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)",
"[ADR-021 Workflow Context](../../../specs/06-Decision-Records/ADR-021-workflow-context.md)",
"[Backend Engineering Guidelines](../../../specs/05-Engineering-Guidelines/05-02-backend-guidelines.md)",
"[Schema — v1.8.0 Tables](../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)",
"[Data Dictionary](../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)",
"Upstream: [Kadajett/nestjs-best-practices](https://github.com/Kadajett/nestjs-best-practices) v1.1.0"
]
}
@@ -126,7 +126,7 @@ export class UsersController {
@SerializeOptions({ type: UserResponseDto })
async findAll(): Promise<UserResponseDto[]> {
const users = await this.usersService.findAll();
return users.map((u) => plainToInstance(UserResponseDto, u)); return users.map(u => plainToInstance(UserResponseDto, u));
}
@Get(':id')
@@ -159,7 +159,10 @@ export class UsersService {
@Controller('users')
export class UsersController {
@Get(':id')
async findOne(@Param('id') id: string, @Headers('X-API-Version') version: string = '1'): Promise<any> { async findOne(
@Param('id') id: string,
@Headers('X-API-Version') version: string = '1',
): Promise<any> {
return this.usersService.findOne(id, version);
}
}
@@ -1,7 +1,7 @@
---
title: Avoid Circular Dependencies
impact: CRITICAL
impactDescription: '#1 cause of runtime crashes' impactDescription: "#1 cause of runtime crashes"
tags: architecture, modules, dependencies
---
@@ -1,7 +1,7 @@
---
title: Organize by Feature Modules
impact: CRITICAL
impactDescription: '3-5x faster onboarding and development' impactDescription: "3-5x faster onboarding and development"
tags: architecture, modules, organization
---
@@ -1,7 +1,7 @@
---
title: Single Responsibility for Services
impact: CRITICAL
impactDescription: '40%+ improvement in testability' impactDescription: "40%+ improvement in testability"
tags: architecture, services, single-responsibility
---
@@ -19,7 +19,7 @@ export class UserAndOrderService {
private userRepo: UserRepository,
private orderRepo: OrderRepository,
private mailer: MailService,
private payment: PaymentService private payment: PaymentService,
) {}
async createUser(dto: CreateUserDto) {
@@ -90,7 +90,7 @@ export class OrdersController {
constructor(
private orders: OrdersService,
private payment: PaymentService,
private notifications: NotificationService private notifications: NotificationService,
) {}
@Post()
@@ -20,7 +20,7 @@ export class OrdersService {
private emailService: EmailService,
private analyticsService: AnalyticsService,
private notificationService: NotificationService,
private loyaltyService: LoyaltyService private loyaltyService: LoyaltyService,
) {}
async createOrder(dto: CreateOrderDto): Promise<Order> {
@@ -51,7 +51,7 @@ export class OrderCreatedEvent {
public readonly orderId: string,
public readonly userId: string,
public readonly items: OrderItem[],
public readonly total: number public readonly total: number,
) {}
}
@@ -60,14 +60,17 @@ export class OrderCreatedEvent {
export class OrdersService {
constructor(
private eventEmitter: EventEmitter2,
private repo: Repository<Order> private repo: Repository<Order>,
) {}
async createOrder(dto: CreateOrderDto): Promise<Order> {
const order = await this.repo.save(dto);
// Emit event - no knowledge of consumers
this.eventEmitter.emit('order.created', new OrderCreatedEvent(order.id, order.userId, order.items, order.total)); this.eventEmitter.emit(
'order.created',
new OrderCreatedEvent(order.id, order.userId, order.items, order.total),
);
return order;
}
@@ -15,7 +15,9 @@ Create custom repositories to encapsulate complex queries and database logic. Th
// Complex queries in services
@Injectable()
export class UsersService {
constructor(@InjectRepository(User) private repo: Repository<User>) {} constructor(
@InjectRepository(User) private repo: Repository<User>,
) {}
async findActiveWithOrders(minOrders: number): Promise<User[]> {
// Complex query logic mixed with business logic
@@ -40,7 +42,9 @@ export class UsersService {
// Custom repository with encapsulated queries
@Injectable()
export class UsersRepository {
constructor(@InjectRepository(User) private repo: Repository<User>) {} constructor(
@InjectRepository(User) private repo: Repository<User>,
) {}
async findById(id: string): Promise<User | null> {
return this.repo.findOne({ where: { id } });
@@ -1,229 +0,0 @@
---
title: Hybrid Identifier Strategy (ADR-019)
impact: CRITICAL
impactDescription: Use INT PK internally + UUID for public API per project ADR-019
tags: database, uuid, identifier, adr-019, api-design, typeorm
---
## Hybrid Identifier Strategy (ADR-019) — March 2026 Pattern
**This project follows ADR-019: INT Primary Key (internal) + UUIDv7 (public API)**
Unlike standard practices that use UUID as the primary key, this project uses a **hybrid approach** optimized for MariaDB performance and API consistency.
> **Updated pattern (March 2026):** Entities extend `UuidBaseEntity`. The `publicId` column is exposed **directly** in API responses; do not use `@Expose({ name: 'id' })` to rename it.
### The Strategy
| Layer | Field | Type | Usage |
| --------------- | ---------- | ----------------------------------- | ------------------------------------------------- |
| **Database PK** | `id` | `INT AUTO_INCREMENT` | Internal foreign keys only (marked `@Exclude()`) |
| **Public API** | `publicId` | `MariaDB UUID` (native, BINARY(16)) | External references, URLs — exposed as-is |
| **DTO Input** | `xxxUuid` | `string` (UUIDv7) | Accept UUID in create/update DTOs |
| **DTO Output** | `publicId` | `string` (UUIDv7) | API returns `publicId` field directly (no rename) |
### Why Hybrid IDs?
- **Performance**: INT PK is faster for joins and indexing than UUID
- **Security**: Internal IDs never exposed in API (enumerable IDs are a risk)
- **Compatibility**: UUID works well with distributed systems and external integrations
- **MariaDB Native**: Uses MariaDB's native UUID type (stored as BINARY(16), auto-converts to string)
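UUIDv7 in particular front-loads a 48-bit Unix-millisecond timestamp (RFC 9562 layout), which is what keeps inserts roughly index-ordered; a quick sketch of reading that timestamp back (the UUID value is illustrative, not a real record):

```typescript
// Extract the 48-bit millisecond timestamp that UUIDv7 stores
// in its first 6 bytes (per the RFC 9562 field layout).
function uuidV7TimestampMs(uuid: string): number {
  const hex = uuid.replace(/-/g, "").slice(0, 12); // first 48 bits as hex
  return parseInt(hex, 16); // fits safely in a JS number (48 < 53 bits)
}

// Illustrative UUIDv7 (hypothetical value)
const id = "019505a1-7c3e-7000-8000-0123456789ab";
const ms = uuidV7TimestampMs(id);
console.log(new Date(ms).toISOString());
```

Because newer rows get lexicographically larger `publicId` values, the unique index on `publicId` stays append-friendly, unlike random UUIDv4 keys.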
### Entity Definition (Current Pattern)
```typescript
import { Entity, Column } from 'typeorm';
import { UuidBaseEntity } from '@/common/entities/uuid-base.entity';
@Entity('contracts')
export class Contract extends UuidBaseEntity {
// publicId (UUIDv7 string) + id (INT, @Exclude) are inherited from UuidBaseEntity
// API response → { publicId: "019505a1-7c3e-7000-8000-abc123...", contractCode: ..., ... }
@Column()
contractCode: string;
@Column()
contractName: string;
@Column({ name: 'project_id' })
projectId: number; // INT FK - internal only; mark @Exclude() so it is never exposed
}
```
**`UuidBaseEntity` (shared base):**
```typescript
import { PrimaryGeneratedColumn, Column, CreateDateColumn, UpdateDateColumn } from 'typeorm';
import { Exclude } from 'class-transformer';
export abstract class UuidBaseEntity {
@PrimaryGeneratedColumn()
@Exclude() // ❗ CRITICAL: INT id must never leak to API
id: number;
@Column({ type: 'uuid', unique: true, generated: 'uuid' })
publicId: string; // UUIDv7, exposed as-is
@CreateDateColumn()
createdAt: Date;
@UpdateDateColumn()
updatedAt: Date;
}
```
### DTO Pattern (Accept UUID, Resolve to INT Internally)
```typescript
// dto/create-contract.dto.ts
import { IsUUID, IsNotEmpty } from 'class-validator';
export class CreateContractDto {
@IsNotEmpty()
@IsUUID('7') // UUIDv7 (MariaDB native)
projectUuid: string; // Accept UUID from client
@IsNotEmpty()
contractCode: string;
@IsNotEmpty()
contractName: string;
}
// ❌ No response DTO with an @Expose rename is needed.
// class-transformer (via the TransformInterceptor) serializes publicId directly.
```
### Service/Controller Pattern
```typescript
@Controller('contracts')
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
export class ContractsController {
constructor(
private contractsService: ContractsService,
private uuidResolver: UuidResolver
) {}
@Post()
async create(@Body() dto: CreateContractDto) {
// Resolve UUID → INT PK for FK relationship
const projectId = await this.uuidResolver.resolveProject(dto.projectUuid);
const contract = await this.contractsService.create({
...dto,
projectId,
});
// Response: TransformInterceptor + @Exclude on id → publicId exposed directly
return contract;
}
@Get(':publicId')
async findOne(@Param('publicId', ParseUuidPipe) publicId: string) {
return this.contractsService.findOneByPublicId(publicId);
}
}
```
### UUID Resolver Helper
```typescript
@Injectable()
export class UuidResolver {
constructor(
@InjectRepository(Project)
private projectRepo: Repository<Project>,
@InjectRepository(Contract)
private contractRepo: Repository<Contract>
) {}
async resolveProject(publicId: string): Promise<number> {
const project = await this.projectRepo.findOne({
where: { publicId },
select: ['id'], // Only INT PK for FK
});
if (!project) throw new NotFoundException('Project not found');
return project.id;
}
async resolveContract(publicId: string): Promise<number> {
const contract = await this.contractRepo.findOne({
where: { publicId },
select: ['id'],
});
if (!contract) throw new NotFoundException('Contract not found');
return contract.id;
}
}
```
### TransformInterceptor (Required — register ONCE)
```typescript
// Register via APP_INTERCEPTOR in CommonModule; never register it again in main.ts
@Injectable()
export class TransformInterceptor implements NestInterceptor {
intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
return next.handle().pipe(
map((data) => instanceToPlain(data)) // Applies @Exclude / @Expose
);
}
}
// common.module.ts
@Module({
providers: [
{
provide: APP_INTERCEPTOR,
useClass: TransformInterceptor,
},
],
})
export class CommonModule {}
```
> **Warning:** Do not also call `app.useGlobalInterceptors(new TransformInterceptor())` in `main.ts`; registering it twice double-wraps the response as `{ data: { data: ... } }`.
### Critical: NEVER ParseInt on UUID
```typescript
// ❌ WRONG - parseInt on UUID gives garbage value
const id = parseInt(projectPublicId); // "0195a1b2-..." → 195 (wrong!)
// ❌ WRONG - Number() on UUID
const id = Number(projectPublicId); // NaN
// ❌ WRONG - Unary plus on UUID
const id = +projectPublicId; // NaN
// ✅ CORRECT - Resolve via database lookup
const projectId = await uuidResolver.resolveProject(projectPublicId);
// ✅ CORRECT - Use TypeORM find with publicId column
const project = await projectRepo.findOne({ where: { publicId: projectPublicId } });
const id = project.id; // Get INT PK from entity
```
### Query with publicId (No Resolution Needed)
```typescript
// Direct UUID lookup in TypeORM
const project = await this.projectRepo.findOne({
where: { publicId: projectPublicId },
});
// Relations use INT FK internally
const contracts = await this.contractRepo.find({
where: { projectId: project.id }, // INT for FK query
});
```
### Reference
- [ADR-019 Hybrid Identifier Strategy](../../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)
- [UUID Implementation Plan](../../../../specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
> **Warning**: Using `parseInt()`, `Number()`, or unary `+` on UUID values violates ADR-019 and will cause data corruption. Always resolve UUIDs via database lookup.
@@ -1,100 +0,0 @@
---
title: No TypeORM Migrations (ADR-009)
impact: CRITICAL
impactDescription: Edit SQL schema files directly; n8n handles data migration. Do not generate TypeORM migration files.
tags: database, schema, migration, adr-009, sql, n8n
---
## No TypeORM Migrations (ADR-009)
**This project does NOT use TypeORM migration files.**
All schema changes must be made **directly** in the canonical SQL file:
- `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
Delta scripts (for incremental rollout to existing environments) go under:
- `specs/03-Data-and-Storage/deltas/YYYY-MM-DD-descriptive-name.sql`
Data migration (e.g., backfilling a new column) is handled by **n8n workflows**, not TypeORM's `QueryRunner`.
---
## Why No Migrations?
1. **Single source of truth** — The full SQL schema is always readable as one file. No need to replay a migration chain to understand current state.
2. **Review friendly** — Schema diff = git diff on the SQL file. Reviewers see the complete picture.
3. **Ops alignment** — DBAs and operators work in SQL, not TypeScript.
4. **n8n for data** — Business-meaningful data transforms live in n8n where they can be versioned, retried, and orchestrated with monitoring.
---
## ✅ Workflow for a Schema Change
1. **Update Data Dictionary** first:
- `specs/03-Data-and-Storage/03-01-data-dictionary.md` — add field meaning + business rules.
2. **Update the canonical schema**:
- Edit `lcbp3-v1.8.0-schema-02-tables.sql` — add/alter column, constraint, index.
3. **Add a delta script** (if deploying to existing env):
- `specs/03-Data-and-Storage/deltas/2026-04-22-add-rfa-revision-column.sql`
```sql
-- Delta: Add revision column to rfa table
ALTER TABLE rfa
ADD COLUMN revision INT NOT NULL DEFAULT 1 AFTER status;
CREATE INDEX idx_rfa_revision ON rfa(revision);
```
4. **Update the Entity** (`backend/src/.../entities/rfa.entity.ts`):
```typescript
@Column({ type: 'int', default: 1 })
revision: number;
```
5. **If data backfill needed** → create n8n workflow, not TypeScript migration.
---
## ❌ Forbidden
```bash
# ❌ DO NOT generate migrations
pnpm typeorm migration:generate ./src/migrations/AddRevision
# ❌ DO NOT run migrations
pnpm typeorm migration:run
```
```typescript
// ❌ DO NOT write migration classes
export class AddRevision1730000000000 implements MigrationInterface {
async up(queryRunner: QueryRunner): Promise<void> { /* ... */ }
async down(queryRunner: QueryRunner): Promise<void> { /* ... */ }
}
```
---
## ✅ TypeORM Config (runtime only)
```typescript
// ormconfig.ts
export default {
type: 'mariadb',
// ...
synchronize: false, // ❗ NEVER true (would auto-sync entity ↔ schema)
migrationsRun: false, // ❗ NEVER true
// ❌ Do NOT specify `migrations:` entries
};
```
`synchronize: false` is mandatory because the canonical SQL file is authoritative — TypeORM should never mutate the schema.
---
## Reference
- [ADR-009 Database Migration Strategy](../../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
- [Schema Tables](../../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)
@@ -1,128 +1,129 @@
---
title: No TypeORM Migrations (ADR-009) title: Use Database Migrations
impact: HIGH impact: HIGH
impactDescription: Use direct SQL schema files instead of TypeORM migrations per project ADR impactDescription: Enables safe, repeatable database schema changes
tags: database, schema, typeorm, migrations, adr-009 tags: database, migrations, typeorm, schema
--- ---
## No TypeORM Migrations (ADR-009) ## Use Database Migrations
**This project follows ADR-009: Direct SQL Schema Management** Never use `synchronize: true` in production. Use migrations for all schema changes. Migrations provide version control for your database, enable safe rollbacks, and ensure consistency across all environments.
Unlike standard NestJS/TypeORM practices, this project does **NOT** use TypeORM migrations. Instead, we manage database schema through direct SQL files. **Incorrect (using synchronize or manual SQL):**
### Why No Migrations?
- **ADR-009 Decision**: Explicit schema control over auto-generated migrations
- **MariaDB-specific features**: Native UUID type, virtual columns, custom indexing
- **Team workflow**: Schema changes reviewed as SQL, not TypeORM migration classes
- **Audit trail**: Single source of truth in `specs/03-Data-and-Storage/`
### Schema File Locations
```
specs/03-Data-and-Storage/
├── lcbp3-v1.8.0-schema-01-drop.sql # Drop statements (dev only)
├── lcbp3-v1.8.0-schema-02-tables.sql # CREATE TABLE statements
├── lcbp3-v1.8.0-schema-03-views-indexes.sql # Views, indexes, constraints
└── deltas/ # Incremental changes
├── 01-add-reference-date.sql
├── 02-add-rbac-bulk-permission.sql
└── 03-fix-numbering-enums.sql
```
### Correct: Using SQL Schema Files
```typescript
// TypeORM configuration - NO migrationsRun // Use synchronize in production
TypeOrmModule.forRoot({
type: 'postgres',
synchronize: true, // DANGEROUS in production!
// Can drop columns, tables, or data
});
// Manual SQL in production
@Injectable()
export class DatabaseService {
async addColumn(): Promise<void> {
await this.dataSource.query('ALTER TABLE users ADD COLUMN age INT');
// No version control, no rollback, inconsistent across envs
}
}
// Modify entities without migration
@Entity()
export class User {
@Column()
email: string;
@Column() // Added without migration
newField: string; // Will crash in production if synchronize is false
}
```
**Correct (use migrations for all schema changes):**
```typescript
// Configure TypeORM for migrations
// data-source.ts
export const dataSource = new DataSource({
type: 'postgres',
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT),
username: process.env.DB_USERNAME,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
entities: ['dist/**/*.entity.js'],
migrations: ['dist/migrations/*.js'],
synchronize: false, // Always false in production
migrationsRun: true, // Run migrations on startup
});
// app.module.ts
TypeOrmModule.forRootAsync({
inject: [ConfigService],
useFactory: (config: ConfigService) => ({
type: 'mariadb', type: 'postgres',
host: config.get('DB_HOST'), host: config.get('DB_HOST'),
port: config.get('DB_PORT'), synchronize: config.get('NODE_ENV') === 'development', // Only in dev
username: config.get('DB_USERNAME'), migrations: ['dist/migrations/*.js'],
password: config.get('DB_PASSWORD'), migrationsRun: true,
database: config.get('DB_NAME'),
entities: ['dist/**/*.entity.js'],
synchronize: false, // NEVER true, even in development
migrationsRun: false, // Disabled per ADR-009
// Migrations are managed via SQL files, not TypeORM
}),
});
```
### Schema Change Process (ADR-009)
1. **Modify SQL file directly**:
```sql
-- specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql
ALTER TABLE correspondences
ADD COLUMN priority VARCHAR(20) DEFAULT 'normal';
```
2. **Create delta for existing databases**:
```sql
-- specs/03-Data-and-Storage/deltas/04-add-priority-column.sql
ALTER TABLE correspondences
ADD COLUMN priority VARCHAR(20) DEFAULT 'normal';
```
3. **Apply to database manually or via deployment script**:
```bash
mysql -u root -p lcbp3 < specs/03-Data-and-Storage/deltas/04-add-priority-column.sql
```
### Entity Definition (No Migration Needed)
```typescript
@Entity('correspondences')
export class Correspondence {
@PrimaryGeneratedColumn()
id: number; // Internal INT PK
@Column({ type: 'uuid' })
uuid: string; // Public UUID
@Column({ name: 'priority', default: 'normal' })
priority: string;
// No migration class needed - schema managed via SQL
}
```
### Anti-Pattern: TypeORM Migrations (Do NOT Use)
```typescript
// ❌ WRONG - Do not create migration files
// migrations/1705312800000-AddUserAge.ts
import { MigrationInterface, QueryRunner } from 'typeorm';
export class AddUserAge1705312800000 implements MigrationInterface {
name = 'AddUserAge1705312800000';
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "users" ADD "age" integer`); // Add column with default to handle existing rows
await queryRunner.query(`
ALTER TABLE "users" ADD "age" integer DEFAULT 0
`);
// Add index for frequently queried columns
await queryRunner.query(`
CREATE INDEX "IDX_users_age" ON "users" ("age")
`);
}
public async down(queryRunner: QueryRunner): Promise<void> {
// Always implement down for rollback
await queryRunner.query(`DROP INDEX "IDX_users_age"`);
await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "age"`);
}
}
// ❌ WRONG - Do not enable migrationsRun
TypeOrmModule.forRoot({
migrationsRun: true, // Disabled per ADR-009
migrations: ['dist/migrations/*.js'],
});
// Safe column rename (two-step)
export class RenameNameToFullName1705312900000 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
// Step 1: Add new column
await queryRunner.query(`
ALTER TABLE "users" ADD "full_name" varchar(255)
`);
// Step 2: Copy data
await queryRunner.query(`
UPDATE "users" SET "full_name" = "name"
`);
// Step 3: Add NOT NULL constraint
await queryRunner.query(`
ALTER TABLE "users" ALTER COLUMN "full_name" SET NOT NULL
`);
// Step 4: Drop old column (after verifying app works)
await queryRunner.query(`
ALTER TABLE "users" DROP COLUMN "name"
`);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "users" ADD "name" varchar(255)`);
await queryRunner.query(`UPDATE "users" SET "name" = "full_name"`);
await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "full_name"`);
}
}
``` ```
### When You Need Schema Changes Reference: [TypeORM Migrations](https://typeorm.io/migrations)
1. Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
2. Add your DDL to the appropriate SQL file
3. Create delta file in `deltas/` directory
4. Apply SQL to your database
5. Update corresponding Entity class
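For step 3, a delta file might look like this (file name and DDL are illustrative only):

```sql
-- deltas/2026-03-20-add-priority-to-correspondences.sql (illustrative)
ALTER TABLE correspondences
  ADD COLUMN priority VARCHAR(16) NOT NULL DEFAULT 'normal';
```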
### Reference
- [ADR-009 Database Strategy](../../../../specs/06-Decision-Records/ADR-009-db-strategy.md)
- [Schema SQL Files](../../../../specs/03-Data-and-Storage/)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
> **Warning**: Attempting to use TypeORM migrations in this project violates ADR-009 and will be rejected in code review.
@@ -47,7 +47,12 @@ export class OrdersService {
      for (const item of items) {
        await manager.save(OrderItem, { orderId: order.id, ...item });
        await manager.decrement(
          Inventory,
          { productId: item.productId },
          'stock',
          item.quantity,
        );
      }

      // If this throws, everything rolls back
@@ -70,7 +75,12 @@ export class TransferService {
    try {
      // Debit source account
      await queryRunner.manager.decrement(
        Account,
        { id: fromId },
        'balance',
        amount,
      );

      // Verify sufficient funds
      const source = await queryRunner.manager.findOne(Account, {
@@ -81,7 +91,12 @@ export class TransferService {
      }

      // Credit destination account
      await queryRunner.manager.increment(
        Account,
        { id: toId },
        'balance',
        amount,
      );

      // Log the transaction
      await queryRunner.manager.save(TransactionLog, {
@@ -106,10 +121,13 @@ export class TransferService {
export class UsersRepository {
  constructor(
    @InjectRepository(User) private repo: Repository<User>,
    private dataSource: DataSource,
  ) {}

  async createWithProfile(
    userData: CreateUserDto,
    profileData: CreateProfileDto,
  ): Promise<User> {
    return this.dataSource.transaction(async (manager) => {
      const user = await manager.save(User, userData);
      await manager.save(Profile, { ...profileData, userId: user.id });
@@ -79,7 +79,9 @@ export class DatabaseService implements OnApplicationShutdown {
    console.log(`Database service shutting down on ${signal}`);

    // Close all connections gracefully
    await Promise.all(
      this.connections.map((conn) => conn.close()),
    );
    console.log('All database connections closed');
  }
@@ -148,7 +150,9 @@ export class HealthController {
      throw new ServiceUnavailableException('Shutting down');
    }
    return this.health.check([
      () => this.db.pingCheck('database'),
    ]);
  }
}
@@ -204,7 +208,10 @@ export class RequestTracker implements NestMiddleware, OnApplicationShutdown {
      });

      // Wait with timeout
      await Promise.race([
        this.shutdownPromise,
        new Promise((resolve) => setTimeout(resolve, 30000)),
      ]);
    }

    console.log('All requests completed');
@@ -61,7 +61,9 @@ export const appConfig = registerAs('app', () => ({
// config/validation.schema.ts
export const validationSchema = Joi.object({
  NODE_ENV: Joi.string()
    .valid('development', 'production', 'test')
    .default('development'),
  PORT: Joi.number().default(3000),
  DB_HOST: Joi.string().required(),
  DB_PORT: Joi.number().default(5432),
@@ -135,7 +137,7 @@ export class AppService {
export class DatabaseService {
  constructor(
    @Inject(databaseConfig.KEY)
    private dbConfig: ConfigType<typeof databaseConfig>,
  ) {
    // Full type inference!
    const host = this.dbConfig.host; // string
@@ -145,7 +147,12 @@ export class DatabaseService {
// Environment files support
ConfigModule.forRoot({
  envFilePath: [
    `.env.${process.env.NODE_ENV}.local`,
    `.env.${process.env.NODE_ENV}`,
    '.env.local',
    '.env',
  ],
});

// .env.development
@@ -45,7 +45,9 @@ logger.log('User ' + userId + ' created at ' + new Date());
async function bootstrap() {
  const app = await NestFactory.create(AppModule, {
    logger:
      process.env.NODE_ENV === 'production'
        ? ['error', 'warn', 'log']
        : ['error', 'warn', 'log', 'debug', 'verbose'],
  });
}
@@ -80,7 +82,7 @@ export class JsonLogger implements LoggerService {
        timestamp: new Date().toISOString(),
        message,
        ...context,
      }),
    );
  }
@@ -92,7 +94,7 @@ export class JsonLogger implements LoggerService {
        message,
        trace,
        ...context,
      }),
    );
  }
@@ -103,7 +105,7 @@ export class JsonLogger implements LoggerService {
        timestamp: new Date().toISOString(),
        message,
        ...context,
      }),
    );
  }
@@ -114,7 +116,7 @@ export class JsonLogger implements LoggerService {
        timestamp: new Date().toISOString(),
        message,
        ...context,
      }),
    );
  }
}
@@ -164,7 +166,7 @@ export class ContextLogger {
        userId: this.cls.get('userId'),
        message,
        ...data,
      }),
    );
  }
@@ -179,7 +181,7 @@ export class ContextLogger {
        error: error.message,
        stack: error.stack,
        ...data,
      }),
    );
  }
}
@@ -192,7 +194,10 @@ import { LoggerModule } from 'nestjs-pino';
LoggerModule.forRoot({
  pinoHttp: {
    level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
    transport:
      process.env.NODE_ENV !== 'production'
        ? { target: 'pino-pretty' }
        : undefined,
    redact: ['req.headers.authorization', 'req.body.password'],
    serializers: {
      req: (req) => ({
@@ -55,7 +55,7 @@ export class OrdersService {
  constructor(
    private usersService: UsersService,
    private inventoryService: InventoryService,
    private paymentService: PaymentService,
  ) {}

  async createOrder(dto: CreateOrderDto): Promise<Order> {
@@ -28,14 +28,14 @@ interface NotificationService {
@Injectable()
export class OrdersService {
  constructor(
    private notifications: NotificationService, // Depends on 8 methods, uses 1
  ) {}

  async confirmOrder(order: Order): Promise<void> {
    await this.notifications.sendEmail(
      order.customer.email,
      'Order Confirmed',
      `Your order ${order.id} has been confirmed.`,
    );
  }
}
@@ -105,14 +105,14 @@ export class SendGridEmailService implements EmailSender {
@Injectable()
export class OrdersService {
  constructor(
    @Inject(EMAIL_SENDER) private emailSender: EmailSender, // Minimal dependency
  ) {}

  async confirmOrder(order: Order): Promise<void> {
    await this.emailSender.sendEmail(
      order.customer.email,
      'Order Confirmed',
      `Your order ${order.id} has been confirmed.`,
    );
  }
}
@@ -150,7 +150,7 @@ type MultiChannelSender = EmailSender & SmsSender & PushSender;
export class AlertService {
  constructor(
    @Inject(MULTI_CHANNEL_SENDER)
    private sender: EmailSender & SmsSender,
  ) {}

  async sendCriticalAlert(user: User, message: string): Promise<void> {
@@ -178,7 +178,9 @@ export class OrdersService {
```typescript
// Shared test suite that any implementation must pass
function testPaymentGatewayContract(
  createGateway: () => PaymentGateway,
) {
  describe('PaymentGateway contract', () => {
    let gateway: PaymentGateway;
@@ -195,11 +197,13 @@ function testPaymentGatewayContract(createGateway: () => PaymentGateway) {
    });

    it('throws InvalidCurrencyException for unsupported currency', async () => {
      await expect(gateway.charge(1000, 'INVALID'))
        .rejects.toThrow(InvalidCurrencyException);
    });

    it('throws TransactionNotFoundException for invalid refund', async () => {
      await expect(gateway.refund('nonexistent'))
        .rejects.toThrow(TransactionNotFoundException);
    });
  });
}
@@ -40,7 +40,7 @@ export class UsersService {
export class UsersService {
  constructor(
    private readonly userRepo: UserRepository,
    @Inject('CONFIG') private readonly config: ConfigType,
  ) {}

  async findAll(): Promise<User[]> {
@@ -19,9 +19,7 @@ interface PaymentGateway {
@Injectable()
export class StripeService implements PaymentGateway {
  charge(amount: number) { /* ... */ }
}

@Injectable()
@@ -60,7 +58,9 @@ export class MockPaymentService implements PaymentGateway {
  providers: [
    {
      provide: PAYMENT_GATEWAY,
      useClass: process.env.NODE_ENV === 'test'
        ? MockPaymentService
        : StripeService,
    },
  ],
  exports: [PAYMENT_GATEWAY],
@@ -70,7 +70,9 @@ export class PaymentModule {}
// Injection
@Injectable()
export class OrdersService {
  constructor(
    @Inject(PAYMENT_GATEWAY) private payment: PaymentGateway,
  ) {}

  async createOrder(dto: CreateOrderDto) {
    await this.payment.charge(dto.amount);
@@ -88,7 +88,7 @@ export class UsersController {
export class EntityNotFoundException extends Error {
  constructor(
    public readonly entity: string,
    public readonly id: string,
  ) {
    super(`${entity} with ID "${id}" not found`);
  }
@@ -95,11 +95,20 @@ export class AllExceptionsFilter implements ExceptionFilter {
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();

    const status =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;
    const message =
      exception instanceof HttpException
        ? exception.message
        : 'Internal server error';

    this.logger.error(
      `${request.method} ${request.url}`,
      exception instanceof Error ? exception.stack : exception,
    );

    response.status(status).json({
      statusCode: status,
@@ -111,7 +120,10 @@ export class AllExceptionsFilter implements ExceptionFilter {
}

// Register globally in main.ts
app.useGlobalFilters(
  new AllExceptionsFilter(app.get(Logger)),
  new DomainExceptionFilter(),
);

// Or via module
@Module({
@@ -1,157 +0,0 @@
---
title: AI Integration Boundary (ADR-018 / ADR-020)
impact: CRITICAL
impactDescription: AI runs on Admin Desktop only; AI → DMS API → DB (never direct); human-in-the-loop validation mandatory; full audit trail.
tags: ai, ollama, boundary, adr-018, adr-020, privacy, audit
---
## AI Integration Boundary
LCBP3 uses **on-premises AI only** (Ollama on Admin Desktop) with strict isolation from data layers.
---
## The Boundary
```
┌────────────────────────────────────────────────────────────┐
│ User Browser (Next.js) │
└─────────────────────────┬──────────────────────────────────┘
│ (authenticated HTTPS)
┌─────────────────────────▼──────────────────────────────────┐
│ DMS API (NestJS) ◀── enforces CASL, validation, audit │
│ ├─ AiGateway (proxies to Ollama) │
│ └─ DB + Storage (Elasticsearch, MariaDB, File System) │
└─────────────────────────┬──────────────────────────────────┘
│ (HTTP → Admin Desktop, internal)
┌─────────────────────────▼──────────────────────────────────┐
│ Admin Desktop (Desk-5439) │
│ ├─ Ollama (Gemma 4) │
│ ├─ PaddleOCR (Thai + English) │
│ └─ n8n orchestration │
└────────────────────────────────────────────────────────────┘
```
**❗ Admin Desktop has NO network access to MariaDB, no SMB to storage, no shared secrets.** It receives base64-encoded file bytes over HTTPS and returns extracted text + suggestions.
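The contract for that hop can be sketched as follows (type and function names are illustrative assumptions, not the real AiGateway API):

```typescript
// Sketch: the DMS API serializes file bytes to base64 before the HTTPS call
// to the Admin Desktop; nothing else crosses the boundary.
interface ExtractRequest {
  fileBase64: string;
  mimeType: string;
}

interface ExtractResponse {
  text: string; // OCR-extracted text
  suggestions: Record<string, string>; // proposed metadata fields
  model: string; // e.g. which Ollama model produced them
}

function buildExtractRequest(fileBytes: Buffer, mimeType: string): ExtractRequest {
  return { fileBase64: fileBytes.toString('base64'), mimeType };
}
```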
---
## Required Patterns
### 1. AiGateway Module (backend)
```typescript
@Module({
controllers: [AiController],
providers: [AiService, AiGateway, AiAuditLogger],
exports: [AiService],
})
export class AiModule {}
@Injectable()
export class AiService {
async extractMetadata(fileId: number, user: User): Promise<ExtractedMetadata> {
// 1. Authorize (CASL: user can read this file)
await this.ability.ensureCan(user, 'read', File, fileId);
// 2. Load file (DMS API, inside the boundary)
const fileBytes = await this.storageService.read(fileId);
// 3. Call Admin Desktop AI over HTTP
const raw = await this.aiGateway.extract(fileBytes);
// 4. Validate AI output schema (Zod)
const parsed = ExtractedMetadataSchema.parse(raw);
// 5. Audit log (who, what, when, model, confidence)
await this.auditLogger.log({
userId: user.id,
action: 'ai.extract_metadata',
fileId,
model: raw.model,
confidence: parsed.confidence,
});
// 6. Return — frontend MUST render for human confirmation
return parsed;
}
}
```
### 2. Human-in-the-Loop
AI output is **never persisted directly**. Users must confirm via `DocumentReviewForm`:
```tsx
<DocumentReviewForm
document={doc}
aiSuggestions={suggestions}
onConfirm={(reviewed) => saveMetadata(reviewed)} // user edits applied
/>
```
The `user_confirmed_at` timestamp and diff (AI suggestion → final value) are stored in the audit log.
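The stored diff can be computed with a small pure helper (an illustrative sketch, not the actual audit implementation):

```typescript
// Sketch: record only the AI-suggested fields the user changed, for storage
// alongside user_confirmed_at in the audit log.
type Meta = Record<string, string>;

function suggestionDiff(
  suggested: Meta,
  final: Meta,
): Array<{ field: string; from: string; to: string }> {
  return Object.keys(final)
    .filter((key) => suggested[key] !== final[key])
    .map((key) => ({ field: key, from: suggested[key] ?? '', to: final[key] }));
}
```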
### 3. Rate Limiting
```typescript
@Post('ai/extract')
@UseGuards(JwtAuthGuard, CaslAbilityGuard, ThrottlerGuard)
@Throttle({ default: { limit: 10, ttl: 60_000 } }) // 10 req/min/user
async extract(@Body() dto: ExtractDto) { /* ... */ }
```
---
## ❌ Forbidden
```typescript
// ❌ AI container connecting to DB
// docker-compose.yml inside ai-service:
// environment:
// DATABASE_URL: mysql://... ← NEVER
// ❌ AI SDK calling cloud API
import OpenAI from 'openai'; // ❌ No cloud AI SDKs in production code
const client = new OpenAI({ apiKey: ... });
// ❌ Persisting AI output without human confirm
async extractAndSave(fileId: number) {
const metadata = await this.ai.extract(fileId);
await this.repo.save({ fileId, ...metadata }); // ❌ skips human review
}
// ❌ Skipping audit log
const result = await this.aiGateway.extract(bytes); // no logging
return result;
```
---
## Audit Log Schema
```sql
CREATE TABLE ai_audit_log (
id INT AUTO_INCREMENT PRIMARY KEY,
public_id UUID UNIQUE NOT NULL,
user_id INT NOT NULL,
action VARCHAR(64) NOT NULL, -- 'ai.extract_metadata', 'ai.classify', etc.
file_id INT,
model VARCHAR(64), -- 'gemma-4:7b', 'paddleocr-v3'
confidence DECIMAL(4,3),
input_hash CHAR(64), -- SHA-256 of input for replay detection
output_summary JSON,
human_confirmed_at DATETIME,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
INDEX idx_user_created (user_id, created_at),
INDEX idx_file (file_id)
);
```
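The `input_hash` column can be derived with Node's built-in crypto; a minimal sketch (helper name is illustrative):

```typescript
import { createHash } from 'crypto';

// SHA-256 of the raw input bytes, hex-encoded -> 64 chars, matching CHAR(64).
// Identical inputs hash identically, which is what enables replay detection.
function inputHash(input: Buffer): string {
  return createHash('sha256').update(input).digest('hex');
}
```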
---
## Reference
- [ADR-018 AI Boundary](../../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)
- [ADR-020 AI Intelligence Integration](../../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)
- [ADR-017 Ollama Data Migration](../../../../specs/06-Decision-Records/ADR-017-ollama-data-migration.md)
@@ -1,181 +0,0 @@
---
title: Workflow Engine + Document Numbering + Workflow Context (ADR-001 / 002 / 021)
impact: CRITICAL
impactDescription: DSL-based state machine; double-lock numbering; integrated workflow context exposed to clients.
tags: workflow, numbering, redlock, version-column, adr-001, adr-002, adr-021
---
## Workflow Engine + Numbering + Context
LCBP3 uses a **unified workflow engine** (DSL-based state machine) across RFA, Transmittal, Correspondence, Circulation, and Shop Drawing. Every state transition goes through the same engine — no per-type routing tables.
---
## ADR-001: Unified Workflow Engine
### State Transition Pattern
```typescript
@Injectable()
export class WorkflowEngine {
async transition(
instanceId: string,
action: WorkflowAction,
actor: User,
context?: WorkflowContext,
): Promise<WorkflowInstance> {
// 1. Load current state from DB (never trust client-provided state)
const instance = await this.repo.findOneByPublicId(instanceId);
if (!instance) throw new NotFoundException();
// 2. Validate transition against DSL
const dsl = await this.dslService.load(instance.workflowTypeId);
const nextState = dsl.resolve(instance.currentState, action);
if (!nextState) {
throw new BusinessException(
`Action ${action} not allowed from state ${instance.currentState}`,
'ไม่สามารถดำเนินการนี้ได้ในสถานะปัจจุบัน',
'กรุณาตรวจสอบขั้นตอนการอนุมัติ',
'WF_INVALID_TRANSITION',
);
}
// 3. Apply transition atomically (optimistic lock via @VersionColumn)
instance.currentState = nextState;
await this.repo.save(instance); // throws OptimisticLockVersionMismatchError on race
// 4. Emit event for listeners (notifications via BullMQ — ADR-008)
this.eventBus.publish(new WorkflowTransitionedEvent(instance, action, actor));
return instance;
}
}
```
### ❌ Anti-Patterns
- ❌ Hard-coded `switch (state)` in controllers/services
- ❌ Trusting `currentState` from request body
- ❌ Creating separate routing tables per document type
---
## ADR-002: Document Numbering (Double-Lock)
Concurrent requests for a new document number **must** use both:
1. **Redis Redlock** — distributed lock across app instances
2. **TypeORM `@VersionColumn`** — optimistic lock on counter row
### Counter Entity
```typescript
@Entity('document_number_counters')
@Unique(['projectId', 'documentTypeId'])
export class DocumentNumberCounter extends UuidBaseEntity {
@Column({ name: 'project_id' })
projectId: number;
@Column({ name: 'document_type_id' })
documentTypeId: number;
@Column({ name: 'last_number', default: 0 })
lastNumber: number;
@VersionColumn()
version: number; // ❗ Optimistic lock — do not rename, do not remove
}
```
### Service Pattern
```typescript
@Injectable()
export class DocumentNumberingService {
constructor(
@InjectRepository(DocumentNumberCounter)
private counterRepo: Repository<DocumentNumberCounter>,
private redlock: RedlockService,
private readonly logger: Logger,
) {}
async generateNext(ctx: NumberingContext): Promise<string> {
const lockKey = `doc_num:${ctx.projectId}:${ctx.documentTypeId}`;
// Distributed lock — 3s TTL, up to 5 retries
const lock = await this.redlock.acquire([lockKey], 3000);
try {
// Optimistic lock via @VersionColumn
const counter = await this.counterRepo.findOne({
where: { projectId: ctx.projectId, documentTypeId: ctx.documentTypeId },
});
if (!counter) {
throw new NotFoundException('Counter not initialized for this project/type');
}
counter.lastNumber += 1;
await this.counterRepo.save(counter); // may throw OptimisticLockVersionMismatchError
return this.formatNumber(ctx, counter.lastNumber);
} catch (err) {
if (err instanceof OptimisticLockVersionMismatchError) {
this.logger.warn(`Numbering race detected for ${lockKey}, retrying`);
// Let caller retry via BullMQ retry policy
}
throw err;
} finally {
await lock.release();
}
}
private formatNumber(ctx: NumberingContext, seq: number): string {
// e.g. "LCBP3-RFA-0042"
return `${ctx.projectCode}-${ctx.typeCode}-${String(seq).padStart(4, '0')}`;
}
}
```
### ❌ Anti-Patterns
- ❌ App-side counter only (`let counter = 0; counter++`)
- ❌ Using `findOne` + `update` without `@VersionColumn`
- ❌ Using only Redis lock without DB optimistic lock (race if Redis fails)
---
## ADR-021: Integrated Workflow Context
Every workflow-aware API response **must** expose:
```typescript
export class WorkflowEnvelope<T> {
data: T;
workflow: {
instancePublicId: string;
currentState: string; // e.g. 'pending_review'
availableActions: string[]; // e.g. ['approve', 'reject', 'request-revision']
canEdit: boolean; // computed from CASL + current state
lastTransitionAt: string; // ISO 8601
};
stepAttachments?: Array<{ // files produced by the current/previous step
publicId: string;
fileName: string;
stepCode: string;
downloadUrl: string;
}>;
}
```
Frontend uses `workflow.availableActions` to render buttons — no client-side state machine logic.
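A minimal sketch of that rule on the client (types mirror the envelope above; the helper name is an assumption):

```typescript
// Derive button configs purely from the server-provided workflow context --
// the client never re-implements transition rules.
interface WorkflowInfo {
  currentState: string;
  availableActions: string[];
  canEdit: boolean;
}

function actionButtons(wf: WorkflowInfo): Array<{ action: string; label: string }> {
  return wf.availableActions.map((action) => ({
    action,
    label: action.replace(/-/g, ' '), // 'request-revision' -> 'request revision'
  }));
}
```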
---
## Reference
- [ADR-001 Unified Workflow Engine](../../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)
- [ADR-002 Document Numbering Strategy](../../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)
- [ADR-021 Workflow Context](../../../../specs/06-Decision-Records/ADR-021-workflow-context.md)
@@ -64,7 +64,11 @@ import { BullModule } from '@nestjs/bullmq';
        },
      },
    }),
    BullModule.registerQueue(
      { name: 'email' },
      { name: 'reports' },
      { name: 'notifications' },
    ),
  ],
})
export class QueueModule {}
@@ -72,7 +76,9 @@ export class QueueModule {}
// Producer: Add jobs to queue
@Injectable()
export class ReportsService {
  constructor(
    @InjectQueue('reports') private reportsQueue: Queue,
  ) {}

  async requestReport(dto: GenerateReportDto): Promise<{ jobId: string }> {
    // Return immediately, process in background
@@ -170,7 +176,7 @@ export class NotificationService {
      {
        attempts: 5,
        backoff: { type: 'exponential', delay: 5000 },
      },
    );
  }
}
@@ -188,7 +194,7 @@ export class ScheduledJobsService implements OnModuleInit {
      {
        repeat: { cron: '0 0 * * *' },
        jobId: 'daily-cleanup', // Prevent duplicates
      },
    );

    // Send digest every hour
@@ -198,7 +204,7 @@ export class ScheduledJobsService implements OnModuleInit {
      {
        repeat: { every: 60 * 60 * 1000 },
        jobId: 'hourly-digest',
      },
    );
  }
}
@@ -64,7 +64,7 @@ export class DatabaseService implements OnModuleInit {
export class CacheWarmerService implements OnApplicationBootstrap {
  constructor(
    private cache: CacheService,
    private products: ProductsService,
  ) {}

  async onApplicationBootstrap(): Promise<void> {
@@ -81,7 +81,10 @@ export class ModuleLoaderService {
  constructor(private lazyModuleLoader: LazyModuleLoader) {}

  async load<T>(
    key: string,
    importFn: () => Promise<{ default: Type<T> } | Type<T>>,
  ): Promise<ModuleRef> {
    if (!this.loadedModules.has(key)) {
      const module = await importFn();
      const moduleType = 'default' in module ? module.default : module;
@@ -51,7 +51,9 @@ export class UsersService {
    imports: [ConfigModule],
    inject: [ConfigService],
    useFactory: (config: ConfigService) => ({
      stores: [
        new KeyvRedis(config.get('REDIS_URL')),
      ],
      ttl: 60 * 1000, // Default 60s
    }),
  }),
@@ -64,7 +66,7 @@ export class AppModule {}
export class ProductsService {
  constructor(
    @Inject(CACHE_MANAGER) private cache: Cache,
    private productsRepo: ProductRepository,
  ) {}

  async getPopular(): Promise<Product[]> {
@@ -115,7 +117,10 @@ export class CacheInvalidationService {
  @OnEvent('product.updated')
  @OnEvent('product.deleted')
  async invalidateProductCaches(event: ProductEvent) {
    await Promise.all([
      this.cache.del('products:popular'),
      this.cache.del(`product:${event.productId}`),
    ]);
  }
}
```
@@ -111,7 +111,7 @@ export class AuthService {
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor(
    private config: ConfigService,
    private usersService: UsersService,
  ) {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
@@ -1,137 +0,0 @@
---
title: Two-Phase File Upload + ClamAV (ADR-016)
impact: CRITICAL
impactDescription: Upload → Temp → ClamAV scan → Commit → Permanent. Whitelist + 50MB cap. StorageService only.
tags: file-upload, clamav, security, adr-016, storage
---
## Two-Phase File Upload (ADR-016)
**Never write uploaded files directly to permanent storage.** All uploads must go through:
```
Client → Upload endpoint → Temp storage → ClamAV scan → Commit endpoint → Permanent storage
```
---
## Constraints (non-negotiable)
| Rule | Value |
| --- | --- |
| Allowed MIME types | `application/pdf`, `image/vnd.dwg`, `application/vnd.openxmlformats-officedocument.wordprocessingml.document`, `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`, `application/zip` |
| Allowed extensions | `.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip` |
| Max size | 50 MB |
| Temp TTL | 24 h (purged by cron) |
| Virus scan | ClamAV (blocking) |
| Mover | `StorageService` only — never `fs.rename` directly from controller |
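The extension and size rules above can be sketched as a pure check (an illustrative helper; the real `FileValidator.assertAllowed` used below also verifies the MIME type):

```typescript
// Whitelist + size cap, enforced before anything touches storage.
const ALLOWED_EXTENSIONS = ['.pdf', '.dwg', '.docx', '.xlsx', '.zip'];
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50 MB

function isAllowed(fileName: string, sizeBytes: number): boolean {
  const dot = fileName.lastIndexOf('.');
  if (dot < 0) return false; // no extension -> reject
  const ext = fileName.slice(dot).toLowerCase();
  return ALLOWED_EXTENSIONS.includes(ext) && sizeBytes <= MAX_SIZE_BYTES;
}
```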
---
## Phase 1: Upload to Temp
```typescript
@Post('upload')
@UseGuards(JwtAuthGuard, ThrottlerGuard)
@UseInterceptors(FileInterceptor('file', {
limits: { fileSize: 50 * 1024 * 1024 }, // 50 MB
}))
async uploadTemp(
@UploadedFile() file: Express.Multer.File,
@CurrentUser() user: User,
): Promise<{ tempId: string; expiresAt: string }> {
// 1. Validate MIME + extension (defense in depth)
this.fileValidator.assertAllowed(file);
// 2. Scan with ClamAV
const scanResult = await this.clamavService.scan(file.buffer);
if (!scanResult.clean) {
throw new BusinessException(
`ClamAV rejected: ${scanResult.signature}`,
'ไฟล์ไม่ปลอดภัย ระบบตรวจพบความเสี่ยง',
'กรุณาตรวจสอบไฟล์และลองใหม่อีกครั้ง',
'FILE_INFECTED',
);
}
// 3. Save to temp (encrypted at rest)
const tempId = await this.storageService.saveToTemp(file, user.id);
return {
tempId,
expiresAt: addHours(new Date(), 24).toISOString(),
};
}
```
---
## Phase 2: Commit in Transaction
The business operation (e.g., creating a Correspondence) promotes temp files to permanent **in the same DB transaction**.
```typescript
async createCorrespondence(dto: CreateCorrespondenceDto, user: User) {
return this.dataSource.transaction(async (manager) => {
// 1. Create domain entity
const entity = await manager.save(Correspondence, {
...dto,
createdById: user.id,
});
// 2. Commit temp files → permanent (ACID together with entity)
await this.storageService.commitFiles(
dto.tempFileIds,
{ entityId: entity.id, entityType: 'correspondence' },
manager,
);
return entity;
});
}
```
If the transaction rolls back, temp files remain and expire in 24h — no orphaned permanent files.
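The TTL check behind that guarantee can be sketched as a pure predicate (names are illustrative; the real logic lives behind `StorageService.purgeExpiredTemp`):

```typescript
// A temp entry is purge-eligible once it is older than the 24 h TTL.
const TEMP_TTL_MS = 24 * 60 * 60 * 1000;

function isExpired(uploadedAt: Date, now: Date): boolean {
  return now.getTime() - uploadedAt.getTime() > TEMP_TTL_MS;
}
```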
---
## StorageService Contract
```typescript
export interface StorageService {
saveToTemp(file: Express.Multer.File, ownerId: number): Promise<string>;
commitFiles(
tempIds: string[],
target: { entityId: number; entityType: string },
manager: EntityManager,
): Promise<FileRecord[]>;
purgeExpiredTemp(): Promise<number>; // called by cron
getPermanentPath(fileId: number): Promise<string>;
}
```
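One way `commitFiles` can satisfy this contract: move each temp file, then insert its `FileRecord` through the caller's `EntityManager` so the rows commit or roll back together with the business entity. The path scheme and record shape below are assumptions; structural stand-in types keep the sketch self-contained:

```typescript
// Minimal structural stand-ins so the sketch is self-contained; the real
// service would use TypeORM's EntityManager and fs.promises.rename.
interface FileRecord { tempId: string; entityId: number; entityType: string; path: string }
interface ManagerLike { save(record: FileRecord): Promise<FileRecord> }
interface MoverLike { move(from: string, to: string): Promise<void> }

export async function commitFiles(
  tempIds: string[],
  target: { entityId: number; entityType: string },
  manager: ManagerLike,
  mover: MoverLike,
): Promise<FileRecord[]> {
  const records: FileRecord[] = [];
  for (const tempId of tempIds) {
    // Hypothetical path scheme: temp/<id> -> permanent/<entityType>/<entityId>/<id>
    const dest = `permanent/${target.entityType}/${target.entityId}/${tempId}`;
    await mover.move(`temp/${tempId}`, dest);
    // Saved via the caller's manager, so the INSERT joins the outer transaction
    records.push(await manager.save({ tempId, ...target, path: dest }));
  }
  return records;
}
```

Note the file move itself is not transactional; if the DB transaction rolls back after a move, a cleanup pass (or the temp-TTL purge) reconciles the filesystem.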
---
## ❌ Forbidden
```typescript
// ❌ Direct write to permanent
fs.writeFileSync(`/var/storage/${file.originalname}`, file.buffer);
// ❌ Skip ClamAV
await this.storageService.savePermanent(file);
// ❌ No whitelist, size, or type limit
@UseInterceptors(FileInterceptor('file'))
// ❌ Commit outside transaction
const entity = await this.repo.save(...);
await this.storageService.commitFiles(tempIds, ...); // race: entity exists, files may fail
```
---
## Reference
- [ADR-016 Security & Authentication](../../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- [Edge Cases](../../../../specs/01-Requirements/01-06-edge-cases-and-rules.md) — file upload scenarios
@@ -47,12 +47,15 @@ export class AdminController {
export class JwtAuthGuard implements CanActivate {
  constructor(
    private jwtService: JwtService,
    private reflector: Reflector,
  ) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    // Check for @Public() decorator
    const isPublic = this.reflector.getAllAndOverride<boolean>('isPublic', [
      context.getHandler(),
      context.getClass(),
    ]);
    if (isPublic) return true;

    const request = context.switchToHttp().getRequest();
@@ -82,7 +85,10 @@ export class RolesGuard implements CanActivate {
  constructor(private reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    const requiredRoles = this.reflector.getAllAndOverride<Role[]>('roles', [
      context.getHandler(),
      context.getClass(),
    ]);
    if (!requiredRoles) return true;
@@ -51,7 +51,7 @@ async function bootstrap() {
      transformOptions: {
        enableImplicitConversion: true,
      },
    }),
  );

  await app.listen(3000);
@@ -61,7 +61,7 @@ describe('UsersController (e2e)', () => {
      whitelist: true,
      transform: true,
      forbidNonWhitelisted: true,
    }),
  );

  await app.init();
@@ -97,7 +97,9 @@ describe('UsersController (e2e)', () => {
  describe('/users/:id (GET)', () => {
    it('should return 404 for non-existent user', () => {
      return request(app.getHttpServer())
        .get('/users/non-existent-id')
        .expect(404);
    });
  });
});
@@ -125,7 +127,9 @@ describe('Protected Routes (e2e)', () => {
  });

  it('should return 401 without token', () => {
    return request(app.getHttpServer())
      .get('/users/me')
      .expect(401);
  });

  it('should return user profile with valid token', () => {
@@ -84,7 +84,9 @@ describe('WeatherService', () => {
  });

  it('should handle API timeout', async () => {
    httpService.get.mockReturnValue(
      throwError(() => new Error('ETIMEDOUT')),
    );
    await expect(service.getWeather('NYC')).rejects.toThrow('Weather service unavailable');
  });

@@ -93,7 +95,7 @@ describe('WeatherService', () => {
    httpService.get.mockReturnValue(
      throwError(() => ({
        response: { status: 429, data: { message: 'Rate limited' } },
      })),
    );
    await expect(service.getWeather('NYC')).rejects.toThrow(TooManyRequestsException);
@@ -115,7 +117,10 @@ describe('UsersService', () => {
  };

  const module = await Test.createTestingModule({
    providers: [
      UsersService,
      { provide: getRepositoryToken(User), useValue: mockRepo },
    ],
  }).compile();

  service = module.get(UsersService);

@@ -86,7 +86,9 @@ describe('UsersService', () => {
  it('should throw on duplicate email', async () => {
    repo.findOne.mockResolvedValue({ id: '1', email: 'test@test.com' });
    await expect(
      service.create({ name: 'Test', email: 'test@test.com' }),
    ).rejects.toThrow(ConflictException);
  });
});
@@ -32,7 +32,6 @@ const CATEGORIES = [
  { prefix: 'api-', name: 'API Design', impact: 'MEDIUM', section: 8 },
  { prefix: 'micro-', name: 'Microservices', impact: 'MEDIUM', section: 9 },
  { prefix: 'devops-', name: 'DevOps & Deployment', impact: 'LOW-MEDIUM', section: 10 },
];

interface RuleFrontmatter {

@@ -51,10 +50,8 @@ interface Rule {
}

function parseFrontmatter(content: string): { frontmatter: RuleFrontmatter | null; body: string } {
  const frontmatterRegex = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/;
  const match = content.match(frontmatterRegex);

  if (!match) {
    return { frontmatter: null, body: content };

@@ -101,7 +98,7 @@ function parseFrontmatter(content: string): { frontmatter: RuleFrontmatter | nul
  return {
    frontmatter: frontmatter as RuleFrontmatter,
    body: body.trim()
  };
}

@@ -121,7 +118,8 @@ function readMetadata(): any {
function readRules(): Rule[] {
  const rulesDir = path.join(__dirname, '..', 'rules');
  const files = fs.readdirSync(rulesDir)
    .filter(f => f.endsWith('.md') && !f.startsWith('_'));

  const rules: Rule[] = [];

@@ -146,7 +144,7 @@ function readRules(): Rule[] {
      frontmatter,
      content: body,
      category: category.name,
      categorySection: category.section
    });
  }
@@ -1,8 +1,6 @@
--- ---
name: next-best-practices name: next-best-practices
description: Next.js best practices for LCBP3-DMS frontend. Enforces ADR-019 (publicId only, no parseInt/id fallback), TanStack Query + RHF + Zod, shadcn/ui, i18n, ADR-007 error UX, ADR-021 IntegratedBanner/WorkflowLifecycle, two-phase file upload. description: Next.js best practices - file conventions, RSC boundaries, data patterns, async APIs, metadata, error handling, route handlers, image/font optimization, bundling
version: 1.8.9
scope: frontend
user-invocable: false user-invocable: false
--- ---
@@ -13,7 +11,6 @@ Apply these rules when writing or reviewing Next.js code.
## File Conventions ## File Conventions
See [file-conventions.md](./file-conventions.md) for: See [file-conventions.md](./file-conventions.md) for:
- Project structure and special files - Project structure and special files
- Route segments (dynamic, catch-all, groups) - Route segments (dynamic, catch-all, groups)
- Parallel and intercepting routes - Parallel and intercepting routes
@@ -24,7 +21,6 @@ See [file-conventions.md](./file-conventions.md) for:
Detect invalid React Server Component patterns. Detect invalid React Server Component patterns.
See [rsc-boundaries.md](./rsc-boundaries.md) for: See [rsc-boundaries.md](./rsc-boundaries.md) for:
- Async client component detection (invalid) - Async client component detection (invalid)
- Non-serializable props detection - Non-serializable props detection
- Server Action exceptions - Server Action exceptions
@@ -34,7 +30,6 @@ See [rsc-boundaries.md](./rsc-boundaries.md) for:
Next.js 15+ async API changes. Next.js 15+ async API changes.
See [async-patterns.md](./async-patterns.md) for: See [async-patterns.md](./async-patterns.md) for:
- Async `params` and `searchParams` - Async `params` and `searchParams`
- Async `cookies()` and `headers()` - Async `cookies()` and `headers()`
- Migration codemod - Migration codemod
@@ -42,21 +37,18 @@ See [async-patterns.md](./async-patterns.md) for:
## Runtime Selection ## Runtime Selection
See [runtime-selection.md](./runtime-selection.md) for: See [runtime-selection.md](./runtime-selection.md) for:
- Default to Node.js runtime - Default to Node.js runtime
- When Edge runtime is appropriate - When Edge runtime is appropriate
## Directives ## Directives
See [directives.md](./directives.md) for: See [directives.md](./directives.md) for:
- `'use client'`, `'use server'` (React) - `'use client'`, `'use server'` (React)
- `'use cache'` (Next.js) - `'use cache'` (Next.js)
## Functions ## Functions
See [functions.md](./functions.md) for: See [functions.md](./functions.md) for:
- Navigation hooks: `useRouter`, `usePathname`, `useSearchParams`, `useParams` - Navigation hooks: `useRouter`, `usePathname`, `useSearchParams`, `useParams`
- Server functions: `cookies`, `headers`, `draftMode`, `after` - Server functions: `cookies`, `headers`, `draftMode`, `after`
- Generate functions: `generateStaticParams`, `generateMetadata` - Generate functions: `generateStaticParams`, `generateMetadata`
@@ -64,7 +56,6 @@ See [functions.md](./functions.md) for:
## Error Handling ## Error Handling
See [error-handling.md](./error-handling.md) for: See [error-handling.md](./error-handling.md) for:
- `error.tsx`, `global-error.tsx`, `not-found.tsx` - `error.tsx`, `global-error.tsx`, `not-found.tsx`
- `redirect`, `permanentRedirect`, `notFound` - `redirect`, `permanentRedirect`, `notFound`
- `forbidden`, `unauthorized` (auth errors) - `forbidden`, `unauthorized` (auth errors)
@@ -72,10 +63,7 @@ See [error-handling.md](./error-handling.md) for:
## Data Patterns ## Data Patterns
Project-specific: See [uuid-handling.md](./uuid-handling.md) for ADR-019 UUID handling patterns.
See [data-patterns.md](./data-patterns.md) for: See [data-patterns.md](./data-patterns.md) for:
- Server Components vs Server Actions vs Route Handlers - Server Components vs Server Actions vs Route Handlers
- Avoiding data waterfalls (`Promise.all`, Suspense, preload) - Avoiding data waterfalls (`Promise.all`, Suspense, preload)
- Client component data fetching - Client component data fetching
@@ -83,7 +71,6 @@ See [data-patterns.md](./data-patterns.md) for:
## Route Handlers ## Route Handlers
See [route-handlers.md](./route-handlers.md) for: See [route-handlers.md](./route-handlers.md) for:
- `route.ts` basics - `route.ts` basics
- GET handler conflicts with `page.tsx` - GET handler conflicts with `page.tsx`
- Environment behavior (no React DOM) - Environment behavior (no React DOM)
@@ -92,7 +79,6 @@ See [route-handlers.md](./route-handlers.md) for:
## Metadata & OG Images ## Metadata & OG Images
See [metadata.md](./metadata.md) for: See [metadata.md](./metadata.md) for:
- Static and dynamic metadata - Static and dynamic metadata
- `generateMetadata` function - `generateMetadata` function
- OG image generation with `next/og` - OG image generation with `next/og`
@@ -101,7 +87,6 @@ See [metadata.md](./metadata.md) for:
## Image Optimization ## Image Optimization
See [image.md](./image.md) for: See [image.md](./image.md) for:
- Always use `next/image` over `<img>` - Always use `next/image` over `<img>`
- Remote images configuration - Remote images configuration
- Responsive `sizes` attribute - Responsive `sizes` attribute
@@ -111,7 +96,6 @@ See [image.md](./image.md) for:
## Font Optimization ## Font Optimization
See [font.md](./font.md) for: See [font.md](./font.md) for:
- `next/font` setup - `next/font` setup
- Google Fonts, local fonts - Google Fonts, local fonts
- Tailwind CSS integration - Tailwind CSS integration
@@ -120,7 +104,6 @@ See [font.md](./font.md) for:
## Bundling ## Bundling
See [bundling.md](./bundling.md) for: See [bundling.md](./bundling.md) for:
- Server-incompatible packages - Server-incompatible packages
- CSS imports (not link tags) - CSS imports (not link tags)
- Polyfills (already included) - Polyfills (already included)
@@ -130,7 +113,6 @@ See [bundling.md](./bundling.md) for:
## Scripts ## Scripts
See [scripts.md](./scripts.md) for: See [scripts.md](./scripts.md) for:
- `next/script` vs native script tags - `next/script` vs native script tags
- Inline scripts need `id` - Inline scripts need `id`
- Loading strategies - Loading strategies
@@ -139,7 +121,6 @@ See [scripts.md](./scripts.md) for:
## Hydration Errors ## Hydration Errors
See [hydration-error.md](./hydration-error.md) for: See [hydration-error.md](./hydration-error.md) for:
- Common causes (browser APIs, dates, invalid HTML) - Common causes (browser APIs, dates, invalid HTML)
- Debugging with error overlay - Debugging with error overlay
- Fixes for each cause - Fixes for each cause
@@ -147,216 +128,26 @@ See [hydration-error.md](./hydration-error.md) for:
## Suspense Boundaries ## Suspense Boundaries
See [suspense-boundaries.md](./suspense-boundaries.md) for: See [suspense-boundaries.md](./suspense-boundaries.md) for:
- CSR bailout with `useSearchParams` and `usePathname` - CSR bailout with `useSearchParams` and `usePathname`
- Which hooks require Suspense boundaries - Which hooks require Suspense boundaries
## Parallel & Intercepting Routes ## Parallel & Intercepting Routes
See [parallel-routes.md](./parallel-routes.md) for: See [parallel-routes.md](./parallel-routes.md) for:
- Modal patterns with `@slot` and `(.)` interceptors - Modal patterns with `@slot` and `(.)` interceptors
- `default.tsx` for fallbacks - `default.tsx` for fallbacks
- Closing modals correctly with `router.back()` - Closing modals correctly with `router.back()`
## i18n (Thai / English)
See [i18n.md](./i18n.md) for:
- `useTranslations('namespace')` pattern
- Key naming (kebab-case, feature-namespaced)
- When Zod messages stay inline vs i18n
- Server-side `userMessage` passthrough
## Two-Phase File Upload
See [two-phase-upload.md](./two-phase-upload.md) for:
- `useDropzone` + `useMutation` hook
- `tempFileIds` form-state pattern
- Whitelist MIME / max-size (must mirror backend)
- Clear-on-submit / expired-temp handling
## Self-Hosting ## Self-Hosting
See [self-hosting.md](./self-hosting.md) for: See [self-hosting.md](./self-hosting.md) for:
- `output: 'standalone'` for Docker - `output: 'standalone'` for Docker
- Cache handlers for multi-instance ISR - Cache handlers for multi-instance ISR
- What works vs needs extra setup - What works vs needs extra setup
## NAP-DMS Project-Specific Rules (MUST FOLLOW) ## Debug Tricks
These rules are mandatory for the NAP-DMS LCBP3 frontend project:
### State Management (บังคับใช้)
**Server State - TanStack Query (React Query)**
```tsx
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
// ❌ ห้ามใช้ useEffect โดยตรง
// ✅ ใช้ TanStack Query
export function useCorrespondences(projectId: string) {
return useQuery({
queryKey: ['correspondences', projectId],
queryFn: () => correspondenceService.getAll(projectId),
staleTime: 5 * 60 * 1000,
});
}
```
**Form State - React Hook Form + Zod**
```tsx
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
const schema = z.object({
title: z.string().min(1, 'กรุณาระบุหัวเรื่อง'),
projectUuid: z.string().uuid('กรุณาเลือกโปรเจกต์'),
});
const form = useForm({
resolver: zodResolver(schema),
});
```
### ADR-019 UUID Handling (CRITICAL — March 2026 Pattern)
> **Updated:** ใช้ `publicId` ตรงๆ — ห้ามใช้ `id ?? ''` fallback หรือ `uuid` ร่วม.
```tsx
// ✅ CORRECT — Interface มีแค่ publicId
interface Contract {
publicId?: string; // UUID from API — ใช้ตัวนี้
contractCode: string;
contractName: string;
}
// ✅ CORRECT — Select options (ไม่มี fallback)
const options = contracts.map((c) => ({
label: `${c.contractName} (${c.contractCode})`,
value: c.publicId ?? '', // ใช้ publicId ล้วน
key: c.publicId ?? c.contractCode, // fallback ไป business field ได้
}));
// ❌ WRONG — pattern เก่า (ห้าม)
interface OldContract {
id?: number; // ❌ อย่า expose INT id
uuid?: string; // ❌ ใช้ชื่อ uuid
publicId?: string;
}
const oldValue = String(c.publicId ?? c.id ?? ''); // ❌ `id ?? ''` fallback ห้าม
// ❌ NEVER parseInt on UUID
// const badId = parseInt(projectPublicId); // "019505..." → 19 (WRONG!)
// ✅ ส่ง UUID string ตรงๆ ไป API
apiClient.get(`/projects/${projectPublicId}`);
```
### Naming Conventions
**Code Identifiers - ภาษาอังกฤษ**
```tsx
// ✅ Correct
interface Correspondence {
documentNumber: string;
createdAt: string;
}
// ❌ Wrong
interface เอกสาร {
เลขที่: string;
}
```
**Comments - ภาษาไทย**
```tsx
// ✅ Correct - อธิบาย logic เป็นภาษาไทย
// ตรวจสอบว่ามีการระบุ projectUuid หรือไม่
if (!data.projectUuid) {
throw new Error('กรุณาเลือกโปรเจกต์');
}
// ❌ Wrong - ห้ามใช้ภาษาอังกฤษใน comments
// Check if projectUuid is provided
```
### UI Components
**บังคับใช้ shadcn/ui**
```tsx
// ✅ Correct
import { Button } from '@/components/ui/button';
import { Card, CardContent } from '@/components/ui/card';
// ❌ Wrong - ไม่สร้าง component เองถ้ามีใน shadcn
const MyButton = () => <button className="...">Click</button>;
```
### File Upload Pattern
```tsx
import { useDropzone } from 'react-dropzone';
// Two-phase upload
const onDrop = useCallback(async (files: File[]) => {
// Phase 1: Upload to temp
const tempFiles = await Promise.all(files.map((file) => uploadService.uploadTemp(file)));
setTempIds(tempFiles.map((f) => f.tempId));
}, []);
// Phase 2: Commit on form submit
const onSubmit = async (data: FormData) => {
await correspondenceService.create({
...data,
tempFileIds,
});
};
```
### API Client Setup
```typescript
// lib/api/client.ts
const apiClient = axios.create({
baseURL: process.env.NEXT_PUBLIC_API_URL,
timeout: 30000,
});
// Auto-add Idempotency-Key
apiClient.interceptors.request.use((config) => {
if (['post', 'put', 'patch'].includes(config.method?.toLowerCase() || '')) {
config.headers['Idempotency-Key'] = uuidv4();
}
return config;
});
```
### Anti-Patterns (ห้ามทำ)
- ❌ Fetch data ใน useEffect โดยตรง (ใช้ TanStack Query)
- ❌ Props drilling ลึกเกิน 3 levels
- ❌ Inline styles (ใช้ Tailwind)
- ❌ `console.log` ใน committed code
- ❌ `parseInt()` / `Number()` / `+` บน UUID values (ADR-019)
- ❌ `id ?? ''` fallback บน `publicId` (ใช้ `publicId ?? ''` หรือ fallback ไป business field)
- ❌ Expose `uuid` คู่กับ `publicId` ใน interface (ใช้ `publicId` อย่างเดียว)
- ❌ ใช้ index เป็น key ใน list
- ❌ Snake_case ใน form field names (ใช้ camelCase)
- ❌ Hardcode Thai/English string ใน component (ใช้ i18n keys)
- ❌ `any` type (strict mode)
---
See [debug-tricks.md](./debug-tricks.md) for: See [debug-tricks.md](./debug-tricks.md) for:
- MCP endpoint for AI-assisted debugging - MCP endpoint for AI-assisted debugging
- Rebuild specific routes with `--debug-build-paths` - Rebuild specific routes with `--debug-build-paths`
@@ -9,18 +9,21 @@ Always type them as `Promise<...>` and await them.
### Pages and Layouts
```tsx
type Props = { params: Promise<{ slug: string }> }

export default async function Page({ params }: Props) {
  const { slug } = await params
}
```

### Route Handlers
```tsx
export async function GET(
  request: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params
}
```

@@ -28,13 +31,13 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
```tsx
type Props = {
  params: Promise<{ slug: string }>
  searchParams: Promise<{ query?: string }>
}

export default async function Page({ params, searchParams }: Props) {
  const { slug } = await params
  const { query } = await searchParams
}
```

@@ -43,37 +46,37 @@ export default async function Page({ params, searchParams }: Props) {
Use `React.use()` for non-async components:
```tsx
import { use } from 'react'

type Props = { params: Promise<{ slug: string }> }

export default function Page({ params }: Props) {
  const { slug } = use(params)
}
```

### generateMetadata
```tsx
type Props = { params: Promise<{ slug: string }> }

export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const { slug } = await params
  return { title: slug }
}
```

## Async Cookies and Headers
```tsx
import { cookies, headers } from 'next/headers'

export default async function Page() {
  const cookieStore = await cookies()
  const headersList = await headers()

  const theme = cookieStore.get('theme')
  const userAgent = headersList.get('user-agent')
}
```
@@ -21,21 +21,21 @@ If the package is only needed on client:
```tsx
// Bad: Fails - package uses window
import SomeChart from 'some-chart-library'

export default function Page() {
  return <SomeChart />
}

// Good: Use dynamic import with ssr: false
import dynamic from 'next/dynamic'

const SomeChart = dynamic(() => import('some-chart-library'), {
  ssr: false,
})

export default function Page() {
  return <SomeChart />
}
```

@@ -47,11 +47,10 @@ For packages that should run on server but have bundling issues:
// next.config.js
module.exports = {
  serverExternalPackages: ['problematic-package'],
}
```

Use this for:
- Packages with native bindings (sharp, bcrypt)
- Packages that don't bundle well (some ORMs)
- Packages with circular dependencies

@@ -62,19 +61,19 @@ Wrap the entire usage in a client component:
```tsx
// components/ChartWrapper.tsx
'use client'

import { Chart } from 'chart-library'

export function ChartWrapper(props) {
  return <Chart {...props} />
}

// app/page.tsx (server component)
import { ChartWrapper } from '@/components/ChartWrapper'

export default function Page() {
  return <ChartWrapper data={data} />
}
```

@@ -84,13 +83,13 @@ Import CSS files instead of using `<link>` tags. Next.js handles bundling and op
```tsx
// Bad: Manual link tag
<link rel="stylesheet" href="/styles.css" />

// Good: Import CSS
import './styles.css'

// Good: CSS Modules
import styles from './Button.module.css'
```

## Polyfills

@@ -122,13 +121,13 @@ Module not found: ESM packages need to be imported
// next.config.js
module.exports = {
  transpilePackages: ['some-esm-package', 'another-package'],
}
```

## Common Problematic Packages

| Package | Issue | Solution |
|---------|-------|----------|
| `sharp` | Native bindings | `serverExternalPackages: ['sharp']` |
| `bcrypt` | Native bindings | `serverExternalPackages: ['bcrypt']` or use `bcryptjs` |
| `canvas` | Native bindings | `serverExternalPackages: ['canvas']` |

@@ -147,7 +146,6 @@ next experimental-analyze
```

This opens an interactive UI to:
- Filter by route, environment (client/server), and type
- Inspect module sizes and import chains
- View treemap visualization

@@ -176,7 +174,7 @@ module.exports = {
  webpack: (config) => {
    // custom webpack config
  },
}
```

Reference: https://nextjs.org/docs/app/building-your-application/upgrading/from-webpack-to-turbopack
@@ -33,20 +33,17 @@ async function UsersPage() {
  const users = await db.user.findMany();

  // Or fetch from external API
  const posts = await fetch('https://api.example.com/posts').then(r => r.json());

  return (
    <ul>
      {users.map(user => <li key={user.id}>{user.name}</li>)}
    </ul>
  );
}
```

**Benefits**:
- No API to maintain
- No client-server waterfall
- Secrets stay on server

@@ -92,14 +89,12 @@ export default function NewPost() {
```

**Benefits**:
- End-to-end type safety
- Progressive enhancement (works without JS)
- Automatic request handling
- Integrated with React transitions

**Constraints**:
- POST only (no GET caching semantics)
- Internal use only (no external access)
- Cannot return non-serializable data

@@ -127,14 +122,12 @@ export async function POST(request: NextRequest) {
```

**When to use**:
- External API access (mobile apps, third parties)
- Webhooks from external services
- GET endpoints that need HTTP caching
- OpenAPI/Swagger documentation needed

**When NOT to use**:
- Internal data fetching (use Server Components)
- Mutations from your UI (use Server Actions)

@@ -158,7 +151,11 @@ async function Dashboard() {
```tsx
// Good: Parallel fetching
async function Dashboard() {
  const [user, posts, comments] = await Promise.all([
    getUser(),
    getPosts(),
    getComments(),
  ]);

  return <div>...</div>;
}

@@ -241,7 +238,7 @@ async function Page() {
}

// Client Component
'use client';
function ClientComponent({ initialData }) {
  const [data, setData] = useState(initialData);
  // ...

@@ -259,7 +256,7 @@ function ClientComponent() {
  useEffect(() => {
    fetch('/api/data')
      .then(r => r.json())
      .then(setData);
  }, []);

@@ -293,7 +290,7 @@ function ClientComponent() {
## Quick Reference

| Pattern | Use Case | HTTP Method | Caching |
|---------|----------|-------------|---------|
| Server Component fetch | Internal reads | Any | Full Next.js caching |
| Server Action | Mutations, form submissions | POST only | No |
| Route Handler | External APIs, webhooks | Any | GET can be cached |
@@ -35,58 +35,42 @@ curl -X POST http://localhost:<port>/_next/mcp \
### Available Tools
#### `get_errors`
Get current errors from dev server (build errors, runtime errors with source-mapped stacks):
```json
{ "name": "get_errors", "arguments": {} }
```
#### `get_routes`
Discover all routes by scanning filesystem:
```json
{ "name": "get_routes", "arguments": {} }
// Optional: { "name": "get_routes", "arguments": { "routerType": "app" } }
```
Returns: `{ "appRouter": ["/", "/api/users/[id]", ...], "pagesRouter": [...] }`
#### `get_project_metadata`
Get project path and dev server URL:
```json
{ "name": "get_project_metadata", "arguments": {} }
```
Returns: `{ "projectPath": "/path/to/project", "devServerUrl": "http://localhost:3000" }`
#### `get_page_metadata`
Get runtime metadata about current page render (requires active browser session):
```json
{ "name": "get_page_metadata", "arguments": {} }
```
Returns segment trie data showing layouts, boundaries, and page components.
#### `get_logs`
Get path to Next.js development log file:
```json
{ "name": "get_logs", "arguments": {} }
```
Returns path to `<distDir>/logs/next-development.log`
#### `get_server_action_by_id`
Locate a Server Action by ID:
```json
{ "name": "get_server_action_by_id", "arguments": { "actionId": "<action-id>" } }
```
@@ -116,7 +100,6 @@ next build --debug-build-paths "/blog/[slug]"
```
Use this to:
- Quickly verify a build fix without full rebuild
- Debug static generation issues for specific pages
- Iterate faster on build errors
@@ -7,19 +7,18 @@ These are React directives, not Next.js specific.
### `'use client'`
Marks a component as a Client Component. Required for:
- React hooks (`useState`, `useEffect`, etc.)
- Event handlers (`onClick`, `onChange`)
- Browser APIs (`window`, `localStorage`)
```tsx
'use client'
import { useState } from 'react'
export function Counter() {
  const [count, setCount] = useState(0)
  return <button onClick={() => setCount(count + 1)}>{count}</button>
}
```
@@ -30,7 +29,7 @@ Reference: https://react.dev/reference/rsc/use-client
Marks a function as a Server Action. Can be passed to Client Components.
```tsx
'use server'
export async function submitForm(formData: FormData) {
  // Runs on server
@@ -42,10 +41,10 @@ Or inline within a Server Component:
```tsx
export default function Page() {
  async function submit() {
    'use server'
    // Runs on server
  }
  return <form action={submit}>...</form>
}
```
@@ -60,10 +59,10 @@ Reference: https://react.dev/reference/rsc/use-server
Marks a function or component for caching. Part of Next.js Cache Components.
```tsx
'use cache'
export async function getCachedData() {
  return await fetchData()
}
```
@@ -11,15 +11,21 @@ Reference: https://nextjs.org/docs/app/getting-started/error-handling
Catches errors in a route segment and its children:
```tsx
'use client'
export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  return (
    <div>
      <h2>Something went wrong!</h2>
      <button onClick={() => reset()}>Try again</button>
    </div>
  )
}
```
@@ -30,9 +36,15 @@ export default function Error({ error, reset }: { error: Error & { digest?: stri
Catches errors in root layout:
```tsx
'use client'
export default function GlobalError({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  return (
    <html>
      <body>
@@ -40,7 +52,7 @@ export default function GlobalError({ error, reset }: { error: Error & { digest?
        <button onClick={() => reset()}>Try again</button>
      </body>
    </html>
  )
}
```
@@ -95,7 +107,6 @@ async function createPost(formData: FormData) {
```
Same applies to:
- `redirect()` - 307 temporary redirect
- `permanentRedirect()` - 308 permanent redirect
- `notFound()` - 404 not found
@@ -105,15 +116,15 @@ Same applies to:
Use `unstable_rethrow()` to re-throw these errors in catch blocks:
```tsx
import { unstable_rethrow } from 'next/navigation'
async function action() {
  try {
    // ...
    redirect('/success')
  } catch (error) {
    unstable_rethrow(error) // Re-throws Next.js internal errors
    return { error: 'Something went wrong' }
  }
}
```
@@ -121,13 +132,13 @@ async function action() {
## Redirects
```tsx
import { redirect, permanentRedirect } from 'next/navigation'
// 307 Temporary - use for most cases
redirect('/new-path')
// 308 Permanent - use for URL migrations (cached by browsers)
permanentRedirect('/new-url')
```
## Auth Errors
@@ -135,20 +146,20 @@ permanentRedirect('/new-url');
Trigger auth-related error pages:
```tsx
import { forbidden, unauthorized } from 'next/navigation'
async function Page() {
  const session = await getSession()
  if (!session) {
    unauthorized() // Renders unauthorized.tsx (401)
  }
  if (!session.hasAccess) {
    forbidden() // Renders forbidden.tsx (403)
  }
  return <Dashboard />
}
```
@@ -157,12 +168,12 @@ Create corresponding error pages:
```tsx
// app/forbidden.tsx
export default function Forbidden() {
  return <div>You don't have access to this resource</div>
}
// app/unauthorized.tsx
export default function Unauthorized() {
  return <div>Please log in to continue</div>
}
```
@@ -179,24 +190,24 @@ export default function NotFound() {
      <h2>Not Found</h2>
      <p>Could not find the requested resource</p>
    </div>
  )
}
```
### Triggering Not Found
```tsx
import { notFound } from 'next/navigation'
export default async function Page({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  const post = await getPost(id)
  if (!post) {
    notFound() // Renders closest not-found.tsx
  }
  return <div>{post.title}</div>
}
```
@@ -28,7 +28,7 @@ app/
## Special Files
| File | Purpose |
|------|---------|
| `page.tsx` | UI for a route segment |
| `layout.tsx` | Shared UI for segment and children |
| `loading.tsx` | Loading UI (Suspense boundary) |
@@ -74,7 +74,6 @@ app/
```
Conventions:
- `(.)` - same level
- `(..)` - one level up
- `(..)(..)` - two levels up
@@ -130,7 +129,7 @@ export const proxyConfig = {
```
| Version | File | Export | Config |
|---------|------|--------|--------|
| v14-15 | `middleware.ts` | `middleware()` | `config` |
| v16+ | `proxy.ts` | `proxy()` | `proxyConfig` |
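As a minimal sketch of the v14-15 convention (the cookie name and matcher path are illustrative; in v16+ the same body would live in `proxy.ts` and be exported as `proxy()` with `proxyConfig`):

```typescript
// middleware.ts — a sketch only; cookie name and paths are assumptions.
// Next.js passes a NextRequest (a Request subclass), so plain Request APIs
// work here, and returning a standard Response (or nothing) is supported.
export function middleware(request: Request) {
  const hasSession = (request.headers.get('cookie') ?? '').includes('session=')
  if (!hasSession) {
    // Send unauthenticated visitors to the login page
    return Response.redirect(new URL('/login', request.url), 307)
  }
  // No return value: let the request continue to the route
}

export const config = { matcher: ['/dashboard/:path*'] }
```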
+27 -28
@@ -6,45 +6,44 @@ Use `next/font` for automatic font optimization with zero layout shift.
```tsx
// app/layout.tsx
import { Inter } from 'next/font/google'
const inter = Inter({ subsets: ['latin'] })
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  )
}
```
## Multiple Fonts
```tsx
import { Inter, Roboto_Mono } from 'next/font/google'
const inter = Inter({
  subsets: ['latin'],
  variable: '--font-inter',
})
const robotoMono = Roboto_Mono({
  subsets: ['latin'],
  variable: '--font-roboto-mono',
})
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={`${inter.variable} ${robotoMono.variable}`}>
      <body>{children}</body>
    </html>
  )
}
```
Use in CSS:
```css
body {
  font-family: var(--font-inter);
@@ -62,35 +61,35 @@ code {
const inter = Inter({
  subsets: ['latin'],
  weight: '400',
})
// Multiple weights
const inter = Inter({
  subsets: ['latin'],
  weight: ['400', '500', '700'],
})
// Variable font (recommended) - includes all weights
const inter = Inter({
  subsets: ['latin'],
  // No weight needed - variable fonts support all weights
})
// With italic
const inter = Inter({
  subsets: ['latin'],
  style: ['normal', 'italic'],
})
```
## Local Fonts
```tsx
import localFont from 'next/font/local'
const myFont = localFont({
  src: './fonts/MyFont.woff2',
})
// Multiple files for different weights
const myFont = localFont({
@@ -106,32 +105,32 @@ const myFont = localFont({
      style: 'normal',
    },
  ],
})
// Variable font
const myFont = localFont({
  src: './fonts/MyFont-Variable.woff2',
  variable: '--font-my-font',
})
```
## Tailwind CSS Integration
```tsx
// app/layout.tsx
import { Inter } from 'next/font/google'
const inter = Inter({
  subsets: ['latin'],
  variable: '--font-inter',
})
export default function RootLayout({ children }) {
  return (
    <html lang="en" className={inter.variable}>
      <body>{children}</body>
    </html>
  )
}
```
@@ -145,7 +144,7 @@ module.exports = {
      },
    },
  },
}
```
## Preloading Subsets
@@ -154,10 +153,10 @@ Only load needed character subsets:
```tsx
// Latin only (most common)
const inter = Inter({ subsets: ['latin'] })
// Multiple subsets
const inter = Inter({ subsets: ['latin', 'latin-ext', 'cyrillic'] })
```
## Display Strategy
@@ -168,7 +167,7 @@ Control font loading behavior:
const inter = Inter({
  subsets: ['latin'],
  display: 'swap', // Default - shows fallback, swaps when loaded
})
// Options:
// 'auto' - browser decides
@@ -232,15 +231,15 @@ const inter = Inter({ subsets: ['latin'] })
```tsx
// For component-specific fonts, export from a shared file
// lib/fonts.ts
import { Inter, Playfair_Display } from 'next/font/google'
export const inter = Inter({ subsets: ['latin'], variable: '--font-inter' })
export const playfair = Playfair_Display({ subsets: ['latin'], variable: '--font-playfair' })
// components/Heading.tsx
import { playfair } from '@/lib/fonts'
export function Heading({ children }) {
  return <h1 className={playfair.className}>{children}</h1>
}
```
+19 -19
@@ -7,7 +7,7 @@ Reference: https://nextjs.org/docs/app/api-reference/functions
## Navigation Hooks (Client)
| Hook | Purpose | Reference |
|------|---------|-----------|
| `useRouter` | Programmatic navigation (`push`, `replace`, `back`, `refresh`) | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-router) |
| `usePathname` | Get current pathname | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-pathname) |
| `useSearchParams` | Read URL search parameters | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-search-params) |
@@ -20,7 +20,7 @@ Reference: https://nextjs.org/docs/app/api-reference/functions
## Server Functions
| Function | Purpose | Reference |
|----------|---------|-----------|
| `cookies` | Read/write cookies | [Docs](https://nextjs.org/docs/app/api-reference/functions/cookies) |
| `headers` | Read request headers | [Docs](https://nextjs.org/docs/app/api-reference/functions/headers) |
| `draftMode` | Enable preview of unpublished CMS content | [Docs](https://nextjs.org/docs/app/api-reference/functions/draft-mode) |
@@ -31,7 +31,7 @@ Reference: https://nextjs.org/docs/app/api-reference/functions
## Generate Functions
| Function | Purpose | Reference |
|----------|---------|-----------|
| `generateStaticParams` | Pre-render dynamic routes at build time | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-static-params) |
| `generateMetadata` | Dynamic metadata | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-metadata) |
| `generateViewport` | Dynamic viewport config | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-viewport) |
@@ -41,7 +41,7 @@ Reference: https://nextjs.org/docs/app/api-reference/functions
## Request/Response
| Function | Purpose | Reference |
|----------|---------|-----------|
| `NextRequest` | Extended Request with helpers | [Docs](https://nextjs.org/docs/app/api-reference/functions/next-request) |
| `NextResponse` | Extended Response with helpers | [Docs](https://nextjs.org/docs/app/api-reference/functions/next-response) |
| `ImageResponse` | Generate OG images | [Docs](https://nextjs.org/docs/app/api-reference/functions/image-response) |
@@ -54,30 +54,30 @@ Use `next/link` for internal navigation instead of `<a>` tags.
```tsx
// Bad: Plain anchor tag
<a href="/about">About</a>
// Good: Next.js Link
import Link from 'next/link'
<Link href="/about">About</Link>
```
Active link styling:
```tsx
'use client'
import Link from 'next/link'
import { usePathname } from 'next/navigation'
export function NavLink({ href, children }) {
  const pathname = usePathname()
  return (
    <Link href={href} className={pathname === href ? 'active' : ''}>
      {children}
    </Link>
  )
}
```
@@ -86,23 +86,23 @@ export function NavLink({ href, children }) {
```tsx
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
  const posts = await getPosts()
  return posts.map((post) => ({ slug: post.slug }))
}
```
### After Response
```tsx
import { after } from 'next/server'
export async function POST(request: Request) {
  const data = await processRequest(request)
  after(async () => {
    await logAnalytics(data)
  })
  return Response.json({ success: true })
}
```
@@ -17,16 +17,16 @@ In development, click the hydration error to see the server/client diff.
```tsx
// Bad: Causes mismatch - window doesn't exist on server
<div>{window.innerWidth}</div>
// Good: Use client component with mounted check
'use client'
import { useState, useEffect } from 'react'
export function ClientOnly({ children }: { children: React.ReactNode }) {
  const [mounted, setMounted] = useState(false)
  useEffect(() => setMounted(true), [])
  return mounted ? children : null
}
```
@@ -36,12 +36,12 @@ Server and client may be in different timezones:
```tsx
// Bad: Causes mismatch
<span>{new Date().toLocaleString()}</span>
// Good: Render on client only
'use client'
const [time, setTime] = useState<string>()
useEffect(() => setTime(new Date().toLocaleString()), [])
```
### Random Values or IDs
@@ -78,9 +78,14 @@ Scripts that modify DOM during hydration.
```tsx
// Good: Use next/script with afterInteractive
import Script from 'next/script'
export default function Page() {
  return (
    <Script
      src="https://example.com/script.js"
      strategy="afterInteractive"
    />
  )
}
```
@@ -1,79 +0,0 @@
# i18n (Thai / English)
LCBP3 frontend **must not** hardcode Thai or English UI strings in components.
## Rules
1. **All user-facing strings go through the i18n layer** (`next-intl` / `i18next` — check `frontend/package.json`).
2. **Keys use kebab-case**, namespaced by feature:
- `correspondence.list.title`
- `correspondence.form.submit`
- `common.actions.cancel`
3. **Comments in code remain Thai** (business logic explanation); **only UI copy** goes through i18n.
4. **Error messages** from backend (via ADR-007 `userMessage`) are already localized server-side — render them directly, don't translate client-side.
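Rule 2's naming scheme can be sketched as a tiny helper (purely illustrative — real lookups go through the i18n layer's `useTranslations`, not string concatenation):

```typescript
// Sketch: compose a feature-namespaced, kebab-case i18n key.
// The helper and its name are illustrative, not part of the codebase.
function i18nKey(feature: string, ...parts: string[]): string {
  // Convert camelCase segments to kebab-case, then join with dots
  const kebab = (s: string) =>
    s.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase()
  return [feature, ...parts].map(kebab).join('.')
}

// i18nKey('correspondence', 'list', 'title') -> 'correspondence.list.title'
// i18nKey('common', 'actions', 'cancel')    -> 'common.actions.cancel'
```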
---
## ❌ Wrong
```tsx
export function CorrespondenceHeader() {
return <h1>รายการหนังสือติดต่อ</h1>; // ❌ hardcoded Thai
}
toast.success('บันทึกสำเร็จ'); // ❌ hardcoded
```
---
## ✅ Right
```tsx
import { useTranslations } from 'next-intl';
export function CorrespondenceHeader() {
const t = useTranslations('correspondence.list');
return <h1>{t('title')}</h1>;
}
toast.success(t('save.success'));
```
Translation files:
```json
// messages/th.json
{
"correspondence": {
"list": { "title": "รายการหนังสือติดต่อ" },
"save": { "success": "บันทึกสำเร็จ" }
}
}
// messages/en.json
{
"correspondence": {
"list": { "title": "Correspondence List" },
"save": { "success": "Saved successfully" }
}
}
```
---
## Zod Error Messages
Zod error messages shown in forms **do** stay in Thai inline (per `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`), because they're schema-bound and rarely need translation. If dual-language support becomes required, wrap with an i18n-aware resolver:
```ts
const schema = z.object({
projectUuid: z.string().uuid(t('validation.project.required')),
});
```
---
## Reference
- [i18n Guidelines](../../../specs/05-Engineering-Guidelines/05-08-i18n-guidelines.md)
- [Frontend Guidelines](../../../specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md)
+9 -9
@@ -6,11 +6,11 @@ Use `next/image` for automatic image optimization.
```tsx
// Bad: Avoid native img
<img src="/hero.png" alt="Hero" />
// Good: Use next/image
import Image from 'next/image'
<Image src="/hero.png" alt="Hero" width={800} height={400} />
```
## Required Props
@@ -51,7 +51,7 @@ module.exports = {
      },
    ],
  },
}
```
## Responsive Images
@@ -155,19 +155,19 @@ When using `output: 'export'`, use `unoptimized` or custom loader:
```tsx
// Option 1: Disable optimization
<Image src="/hero.png" alt="Hero" width={800} height={400} unoptimized />
// Option 2: Global config
// next.config.js
module.exports = {
  output: 'export',
  images: { unoptimized: true },
}
// Option 3: Custom loader (Cloudinary, Imgix, etc.)
const cloudinaryLoader = ({ src, width, quality }) => {
  return `https://res.cloudinary.com/demo/image/upload/w_${width},q_${quality || 75}/${src}`
}
<Image loader={cloudinaryLoader} src="sample.jpg" alt="Sample" width={800} height={400} />
```
+60 -52
@@ -7,7 +7,6 @@ Add SEO metadata to Next.js pages using the Metadata API.
The `metadata` object and `generateMetadata` function are **only supported in Server Components**. They cannot be used in Client Components.
If the target page has `'use client'`:
1. Remove `'use client'` if possible, move client logic to child components
2. Or extract metadata to a parent Server Component layout
3. Or split the file: Server Component with metadata imports Client Components
@@ -15,25 +14,25 @@ If the target page has `'use client'`:
## Static Metadata
```tsx
import type { Metadata } from 'next'
export const metadata: Metadata = {
  title: 'Page Title',
  description: 'Page description for search engines',
}
```
## Dynamic Metadata
```tsx
import type { Metadata } from 'next'
type Props = { params: Promise<{ slug: string }> }
export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const { slug } = await params
  const post = await getPost(slug)
  return { title: post.title, description: post.description }
}
```
@@ -42,11 +41,11 @@ export async function generateMetadata({ params }: Props): Promise<Metadata> {
Use React `cache()` when the same data is needed for both metadata and page: Use React `cache()` when the same data is needed for both metadata and page:
```tsx ```tsx
import { cache } from 'react'; import { cache } from 'react'
export const getPost = cache(async (slug: string) => { export const getPost = cache(async (slug: string) => {
return await db.posts.findFirst({ where: { slug } }); return await db.posts.findFirst({ where: { slug } })
}); })
``` ```
## Viewport ## Viewport
@@ -54,17 +53,17 @@ export const getPost = cache(async (slug: string) => {
Separate from metadata for streaming support: Separate from metadata for streaming support:
```tsx ```tsx
import type { Viewport } from 'next'; import type { Viewport } from 'next'
export const viewport: Viewport = { export const viewport: Viewport = {
width: 'device-width', width: 'device-width',
initialScale: 1, initialScale: 1,
themeColor: '#000000', themeColor: '#000000',
}; }
// Or dynamic // Or dynamic
export function generateViewport({ params }): Viewport { export function generateViewport({ params }): Viewport {
return { themeColor: getThemeColor(params) }; return { themeColor: getThemeColor(params) }
} }
``` ```
@@ -75,7 +74,7 @@ In root layout for consistent naming:
```tsx
export const metadata: Metadata = {
  title: { default: 'Site Name', template: '%s | Site Name' },
}
```
## Metadata File Conventions
@@ -85,7 +84,7 @@ Reference: https://nextjs.org/docs/app/getting-started/project-structure#metadat
Place these files in `app/` directory (or route segments):

| File | Purpose |
|------|---------|
| `favicon.ico` | Favicon |
| `icon.png` / `icon.svg` | App icon |
| `apple-icon.png` | Apple app icon |
@@ -109,7 +108,6 @@ app/
```

**Tips:**
- A single `opengraph-image.png` covers both Open Graph and Twitter (Twitter falls back to OG) - A single `opengraph-image.png` covers both Open Graph and Twitter (Twitter falls back to OG)
- Static `title` and `description` in layout metadata is sufficient for most pages - Static `title` and `description` in layout metadata is sufficient for most pages
- Only use dynamic `generateMetadata` when content varies per page - Only use dynamic `generateMetadata` when content varies per page
@@ -128,7 +126,7 @@ Generate dynamic Open Graph images using `next/og`.
```tsx
// Good
import { ImageResponse } from 'next/og'

// Bad
// import { ImageResponse } from '@vercel/og'
@@ -139,11 +137,11 @@ import { ImageResponse } from 'next/og';
```tsx
// app/opengraph-image.tsx
import { ImageResponse } from 'next/og'

export const alt = 'Site Name'
export const size = { width: 1200, height: 630 }
export const contentType = 'image/png'

export default function Image() {
  return new ImageResponse(
@@ -163,7 +161,7 @@ export default function Image() {
      </div>
    ),
    { ...size }
  )
}
```
@@ -171,17 +169,17 @@ export default function Image() {
```tsx
// app/blog/[slug]/opengraph-image.tsx
import { ImageResponse } from 'next/og'

export const alt = 'Blog Post'
export const size = { width: 1200, height: 630 }
export const contentType = 'image/png'

type Props = { params: Promise<{ slug: string }> }

export default async function Image({ params }: Props) {
  const { slug } = await params
  const post = await getPost(slug)

  return new ImageResponse(
    (
@@ -204,26 +202,33 @@ export default async function Image({ params }: Props) {
      </div>
    ),
    { ...size }
  )
}
```
## Custom Fonts
```tsx
import { ImageResponse } from 'next/og'
import { join } from 'path'
import { readFile } from 'fs/promises'

export default async function Image() {
  const fontPath = join(process.cwd(), 'assets/fonts/Inter-Bold.ttf')
  const fontData = await readFile(fontPath)

  return new ImageResponse(
    (
      <div style={{ fontFamily: 'Inter', fontSize: 64 }}>
        Custom Font Text
      </div>
    ),
    {
      width: 1200,
      height: 630,
      fonts: [{ name: 'Inter', data: fontData, style: 'normal' }],
    }
  )
}
```
@@ -235,7 +240,6 @@ export default async function Image() {
## Styling Notes

ImageResponse uses Flexbox layout:
- Use `display: 'flex'` - Use `display: 'flex'`
- No CSS Grid support - No CSS Grid support
- Styles must be inline objects - Styles must be inline objects
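A minimal sketch of those constraints (the layout values are illustrative, not recommendations):

```tsx
// Every element with multiple children needs an explicit display: 'flex';
// CSS Grid and external stylesheets are not available in ImageResponse.
<div
  style={{
    display: 'flex',
    flexDirection: 'column',
    alignItems: 'center',
    justifyContent: 'center',
    width: '100%',
    height: '100%',
    fontSize: 48,
  }}
>
  <div style={{ display: 'flex' }}>Title</div>
  <div style={{ display: 'flex' }}>Subtitle</div>
</div>
```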
@@ -246,22 +250,22 @@ Use `generateImageMetadata` for multiple images per route:
```tsx
// app/blog/[slug]/opengraph-image.tsx
import { ImageResponse } from 'next/og'

export async function generateImageMetadata({ params }) {
  const images = await getPostImages(params.slug)
  return images.map((img, idx) => ({
    id: idx,
    alt: img.alt,
    size: { width: 1200, height: 630 },
    contentType: 'image/png',
  }))
}

export default async function Image({ params, id }) {
  const images = await getPostImages(params.slug)
  const image = images[id]
  return new ImageResponse(/* ... */)
}
```
@@ -271,22 +275,26 @@ Use `generateSitemaps` for large sites:
```tsx
// app/sitemap.ts
import type { MetadataRoute } from 'next'

export async function generateSitemaps() {
  // Return array of sitemap IDs
  return [{ id: 0 }, { id: 1 }, { id: 2 }]
}

export default async function sitemap({
  id,
}: {
  id: number
}): Promise<MetadataRoute.Sitemap> {
  const start = id * 50000
  const end = start + 50000
  const products = await getProducts(start, end)

  return products.map((product) => ({
    url: `https://example.com/product/${product.id}`,
    lastModified: product.updatedAt,
  }))
}
```
@@ -24,7 +24,13 @@ app/
```tsx
// app/layout.tsx
export default function RootLayout({
  children,
  modal,
}: {
  children: React.ReactNode;
  modal: React.ReactNode;
}) {
  return (
    <html>
      <body>
@@ -57,7 +63,11 @@ The `(.)` prefix intercepts routes at the same level.
// app/@modal/(.)photos/[id]/page.tsx
import { Modal } from '@/components/modal';

export default async function PhotoModal({
  params
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params;
  const photo = await getPhoto(id);
@@ -73,7 +83,11 @@ export default async function PhotoModal({ params }: { params: Promise<{ id: str
```tsx
// app/photos/[id]/page.tsx
export default async function PhotoPage({
  params
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params;
  const photo = await getPhoto(id);
@@ -113,14 +127,11 @@ export function Modal({ children }: { children: React.ReactNode }) {
  }, [router]);

  // Close on overlay click
  const handleOverlayClick = useCallback((e: React.MouseEvent) => {
    if (e.target === overlayRef.current) {
      router.back(); // Correct
    }
  }, [router]);

  return (
    <div
@@ -145,13 +156,11 @@ export function Modal({ children }: { children: React.ReactNode }) {
### Why NOT `router.push('/')` or `<Link href="/">`?

Using `push` or `Link` to "close" a modal:
1. Adds a new history entry (back button shows modal again) 1. Adds a new history entry (back button shows modal again)
2. Doesn't properly clear the intercepted route 2. Doesn't properly clear the intercepted route
3. Can cause the modal to flash or persist unexpectedly 3. Can cause the modal to flash or persist unexpectedly
`router.back()` correctly:
1. Removes the intercepted route from history 1. Removes the intercepted route from history
2. Returns to the previous page 2. Returns to the previous page
3. Properly unmounts the modal 3. Properly unmounts the modal
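Applied to the modal above, a close button is just a `router.back()` call. A minimal sketch (component name and markup are illustrative):

```tsx
'use client';
import { useRouter } from 'next/navigation';

export function CloseModalButton() {
  const router = useRouter();
  // Pops the intercepted route off the history stack instead of pushing a new entry
  return (
    <button onClick={() => router.back()} aria-label="Close">
      ×
    </button>
  );
}
```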
@@ -161,7 +170,7 @@ Using `push` or `Link` to "close" a modal:
Matchers match **route segments**, not filesystem paths:

| Matcher | Matches | Example |
|---------|---------|---------|
| `(.)` | Same level | `@modal/(.)photos` intercepts `/photos` |
| `(..)` | One level up | `@modal/(..)settings` from `/dashboard/@modal` intercepts `/settings` |
| `(..)(..)` | Two levels up | Rarely used |
@@ -172,7 +181,6 @@ Matchers match **route segments**, not filesystem paths:
## Handling Hard Navigation

When users directly visit `/photos/123` (bookmark, refresh, shared link):
- The intercepting route is bypassed - The intercepting route is bypassed
- The full `photos/[id]/page.tsx` renders - The full `photos/[id]/page.tsx` renders
- Modal doesn't appear (expected behavior) - Modal doesn't appear (expected behavior)
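To keep the `@modal` slot empty during hard navigation (so the layout still renders without a modal), the usual companion is a `default.tsx` in the slot that returns `null`:

```tsx
// app/@modal/default.tsx
export default function Default() {
  return null;
}
```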
@@ -222,7 +230,6 @@ app/
### 4. Intercepted Route Shows Wrong Content

Check your matcher:
- `(.)photos` intercepts `/photos` from the same route level - `(.)photos` intercepts `/photos` from the same route level
- If your `@modal` is in `app/dashboard/@modal`, use `(.)photos` to intercept `/dashboard/photos`, not `/photos` - If your `@modal` is in `app/dashboard/@modal`, use `(.)photos` to intercept `/dashboard/photos`, not `/photos`
@@ -265,7 +272,7 @@ export default async function Gallery() {
  return (
    <div className="grid grid-cols-3 gap-4">
      {photos.map(photo => (
        <Link key={photo.id} href={`/photos/${photo.id}`}>
          <img src={photo.thumbnail} alt={photo.title} />
        </Link>
@@ -7,14 +7,14 @@ Create API endpoints with `route.ts` files.
```tsx
// app/api/users/route.ts
export async function GET() {
  const users = await getUsers()
  return Response.json(users)
}

export async function POST(request: Request) {
  const body = await request.json()
  const user = await createUser(body)
  return Response.json(user, { status: 201 })
}
```
@@ -60,11 +60,11 @@ Route handlers run in a **Server Component-like environment**:
```tsx
// Bad: This won't work - no React DOM in route handlers
import { renderToString } from 'react-dom/server'

export async function GET() {
  const html = renderToString(<Component />) // Error!
  return new Response(html)
}
```
@@ -72,15 +72,18 @@ export async function GET() {
```tsx
// app/api/users/[id]/route.ts
export async function GET(
  request: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params
  const user = await getUser(id)

  if (!user) {
    return Response.json({ error: 'Not found' }, { status: 404 })
  }

  return Response.json(user)
}
```
@@ -89,17 +92,17 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
```tsx
export async function GET(request: Request) {
  // URL and search params
  const { searchParams } = new URL(request.url)
  const query = searchParams.get('q')

  // Headers
  const authHeader = request.headers.get('authorization')

  // Cookies (Next.js helper)
  const cookieStore = await cookies()
  const token = cookieStore.get('token')

  return Response.json({ query, token })
}
```
@@ -107,31 +110,31 @@ export async function GET(request: Request) {
```tsx
// JSON response
return Response.json({ data })

// With status
return Response.json({ error: 'Not found' }, { status: 404 })

// With headers
return Response.json(data, {
  headers: {
    'Cache-Control': 'max-age=3600',
  },
})

// Redirect
return Response.redirect(new URL('/login', request.url))

// Stream
return new Response(stream, {
  headers: { 'Content-Type': 'text/event-stream' },
})
```
## When to Use Route Handlers vs Server Actions

| Use Case | Route Handlers | Server Actions |
|----------|----------------|----------------|
| Form submissions | No | Yes |
| Data mutations from UI | No | Yes |
| Third-party webhooks | Yes | No |
@@ -12,33 +12,33 @@ Client components **cannot** be async functions. Only Server Components can be a
```tsx
// Bad: async client component
'use client'

export default async function UserProfile() {
  const user = await getUser() // Cannot await in client component
  return <div>{user.name}</div>
}

// Good: Remove async, fetch data in parent server component
// page.tsx (server component - no 'use client')
export default async function Page() {
  const user = await getUser()
  return <UserProfile user={user} />
}

// UserProfile.tsx (client component)
'use client'

export function UserProfile({ user }: { user: User }) {
  return <div>{user.name}</div>
}
```
```tsx
// Bad: async arrow function client component
'use client'

const Dashboard = async () => {
  const data = await fetchDashboard()
  return <div>{data}</div>
}

// Good: Fetch in server component, pass data down
```
@@ -48,7 +48,6 @@ const Dashboard = async () => {
Props passed from Server → Client must be JSON-serializable.

**Detect:** Server component passes these to a client component:
- Functions (except Server Actions with `'use server'`) - Functions (except Server Actions with `'use server'`)
- `Date` objects - `Date` objects
- `Map`, `Set`, `WeakMap`, `WeakSet` - `Map`, `Set`, `WeakMap`, `WeakSet`
@@ -60,16 +59,16 @@ Props passed from Server → Client must be JSON-serializable.
// Bad: Function prop
// page.tsx (server)
export default function Page() {
  const handleClick = () => console.log('clicked')
  return <ClientButton onClick={handleClick} />
}

// Good: Define function inside client component
// ClientButton.tsx
'use client'

export function ClientButton() {
  const handleClick = () => console.log('clicked')
  return <button onClick={handleClick}>Click</button>
}
```
@@ -77,28 +76,28 @@ export function ClientButton() {
// Bad: Date object (silently becomes string, then crashes)
// page.tsx (server)
export default async function Page() {
  const post = await getPost()
  return <PostCard createdAt={post.createdAt} /> // Date object
}

// PostCard.tsx (client) - will crash on .getFullYear()
'use client'

export function PostCard({ createdAt }: { createdAt: Date }) {
  return <span>{createdAt.getFullYear()}</span> // Runtime error!
}

// Good: Serialize to string on server
// page.tsx (server)
export default async function Page() {
  const post = await getPost()
  return <PostCard createdAt={post.createdAt.toISOString()} />
}

// PostCard.tsx (client)
'use client'

export function PostCard({ createdAt }: { createdAt: string }) {
  const date = new Date(createdAt)
  return <span>{date.getFullYear()}</span>
}
```
@@ -128,28 +127,28 @@ Functions marked with `'use server'` CAN be passed to client components.
```tsx
// Valid: Server Action can be passed
// actions.ts
'use server'

export async function submitForm(formData: FormData) {
  // server-side logic
}

// page.tsx (server)
import { submitForm } from './actions'

export default function Page() {
  return <ClientForm onSubmit={submitForm} /> // OK!
}

// ClientForm.tsx (client)
'use client'

export function ClientForm({ onSubmit }: { onSubmit: (data: FormData) => Promise<void> }) {
  return <form action={onSubmit}>...</form>
}
```
## Quick Reference

| Pattern | Valid? | Fix |
|---------|--------|-----|
| `'use client'` + `async function` | No | Fetch in server parent, pass data |
| Pass `() => {}` to client | No | Define in client or use server action |
| Pass `new Date()` to client | No | Use `.toISOString()` |
@@ -32,7 +32,6 @@ export const runtime = 'edge'
## Detection

**Before adding `runtime = 'edge'`**, check:
1. Does the project already use Edge runtime? 1. Does the project already use Edge runtime?
2. Is there a specific latency requirement? 2. Is there a specific latency requirement?
3. Are all dependencies Edge-compatible? 3. Are all dependencies Edge-compatible?
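If the checks pass, the runtime is opted in per route segment. A hedged sketch (the route path and handler body are illustrative):

```tsx
// app/api/geo/route.ts
export const runtime = 'edge' // opt this route into the Edge runtime

export async function GET(request: Request) {
  // Only Web-standard APIs are available here (no Node.js built-ins)
  return Response.json({ ok: true })
}
```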
@@ -8,12 +8,12 @@ Always use `next/script` instead of native `<script>` tags for better performanc
```tsx
// Bad: Native script tag
<script src="https://example.com/script.js"></script>

// Good: Next.js Script component
import Script from 'next/script'

<Script src="https://example.com/script.js" />
```
## Inline Scripts Need ID
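A sketch of the pattern this heading refers to: Next.js uses the `id` to track and dedupe inline scripts, so omitting it hurts optimization (the script body here is illustrative):

```tsx
import Script from 'next/script'

// The id lets Next.js track this inline script across navigations
<Script id="analytics-init" strategy="afterInteractive">
  {`window.dataLayer = window.dataLayer || [];`}
</Script>
```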
@@ -100,7 +100,7 @@ export default function Layout({ children }) {
## Google Tag Manager

```tsx
import { GoogleTagManager } from '@next/third-parties/google'

export default function Layout({ children }) {
  return (
@@ -108,7 +108,7 @@ export default function Layout({ children }) {
<GoogleTagManager gtmId="GTM-XXXXX" /> <GoogleTagManager gtmId="GTM-XXXXX" />
<body>{children}</body> <body>{children}</body>
</html> </html>
  )
}
```
@@ -116,20 +116,24 @@ export default function Layout({ children }) {
```tsx
// YouTube embed
import { YouTubeEmbed } from '@next/third-parties/google'

<YouTubeEmbed videoid="dQw4w9WgXcQ" />

// Google Maps
import { GoogleMapsEmbed } from '@next/third-parties/google'

<GoogleMapsEmbed
  apiKey="YOUR_API_KEY"
  mode="place"
  q="Brooklyn+Bridge,New+York,NY"
/>
```
## Quick Reference

| Pattern | Issue | Fix |
|---------|-------|-----|
| `<script src="...">` | No optimization | Use `next/script` |
| `<Script>` without id | Can't track inline scripts | Add `id` attribute |
| `<Script>` inside `<Head>` | Wrong placement | Move outside Head |
@@ -77,12 +77,12 @@ services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
@@ -95,8 +95,7 @@ For traditional server deployments:
```js
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'nextjs',
    script: '.next/standalone/server.js',
    instances: 'max',
@@ -105,8 +104,7 @@ module.exports = {
      NODE_ENV: 'production',
      PORT: 3000,
    },
  }],
};
```
@@ -170,7 +168,11 @@ module.exports = class CacheHandler {
    // Set TTL based on revalidate option
    if (ctx?.revalidate) {
      await redis.setex(
        CACHE_PREFIX + key,
        ctx.revalidate,
        JSON.stringify(cacheData)
      );
    } else {
      await redis.set(CACHE_PREFIX + key, JSON.stringify(cacheData));
    }
@@ -195,12 +197,10 @@ const BUCKET = process.env.CACHE_BUCKET;
module.exports = class CacheHandler {
  async get(key) {
    try {
      const response = await s3.send(new GetObjectCommand({
        Bucket: BUCKET,
        Key: `cache/${key}`,
      }));
      const body = await response.Body.transformToString();
      return JSON.parse(body);
    } catch (err) {
@@ -210,8 +210,7 @@ module.exports = class CacheHandler {
  }

  async set(key, data, ctx) {
    await s3.send(new PutObjectCommand({
      Bucket: BUCKET,
      Key: `cache/${key}`,
      Body: JSON.stringify({
@@ -219,8 +218,7 @@ module.exports = class CacheHandler {
        lastModified: Date.now(),
      }),
      ContentType: 'application/json',
    }));
  }
};
```
@@ -228,7 +226,7 @@ module.exports = class CacheHandler {
## What Works vs What Needs Setup

| Feature | Single Instance | Multi-Instance | Notes |
|---------|----------------|----------------|-------|
| SSR | Yes | Yes | No special setup |
| SSG | Yes | Yes | Built at deploy time |
| ISR | Yes | Needs cache handler | Filesystem cache breaks |
@@ -246,7 +244,6 @@ Next.js Image Optimization works out of the box but is CPU-intensive.
### Option 1: Built-in (Simple)

Works automatically, but consider:
- Set `deviceSizes` and `imageSizes` in config to limit variants - Set `deviceSizes` and `imageSizes` in config to limit variants
- Use `minimumCacheTTL` to reduce regeneration - Use `minimumCacheTTL` to reduce regeneration
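Those knobs live in `next.config.js`. A hedged sketch (the values are illustrative, not recommendations):

```js
// next.config.js
module.exports = {
  images: {
    // Limit generated variants to the breakpoints you actually serve
    deviceSizes: [640, 828, 1200, 1920],
    imageSizes: [32, 64, 128, 256],
    // Cache optimized images for at least an hour before regenerating
    minimumCacheTTL: 3600,
  },
};
```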
@@ -320,7 +317,6 @@ npx @opennextjs/aws build
```

Supports:
- AWS Lambda + CloudFront - AWS Lambda + CloudFront
- Cloudflare Workers - Cloudflare Workers
- Netlify Functions - Netlify Functions
@@ -8,27 +8,27 @@ Always requires Suspense boundary in static routes. Without it, the entire page
```tsx
// Bad: Entire page becomes CSR
'use client'

import { useSearchParams } from 'next/navigation'

export default function SearchBar() {
  const searchParams = useSearchParams()
  return <div>Query: {searchParams.get('q')}</div>
}
```
```tsx
// Good: Wrap in Suspense
import { Suspense } from 'react'
import SearchBar from './search-bar'

export default function Page() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <SearchBar />
    </Suspense>
  )
}
```
@@ -39,12 +39,12 @@ Requires Suspense boundary when route has dynamic parameters.
```tsx
// In dynamic route [slug]
// Bad: No Suspense
'use client'

import { usePathname } from 'next/navigation'

export function Breadcrumb() {
  const pathname = usePathname()
  return <nav>{pathname}</nav>
}
```
@@ -60,7 +60,7 @@ If you use `generateStaticParams`, Suspense is optional.
## Quick Reference

| Hook | Suspense Required |
|------|-------------------|
| `useSearchParams()` | Yes |
| `usePathname()` | Yes (dynamic routes) |
| `useParams()` | No |
@@ -1,100 +0,0 @@
# Two-Phase File Upload (Frontend)
Pair with [backend two-phase upload rule](../nestjs-best-practices/rules/security-file-two-phase-upload.md).
## Flow
```
User drops file
→ POST /files/upload (temp) → { tempId, expiresAt }
→ store tempId in form state
→ user submits form
→ POST /correspondences (with tempFileIds) → backend commits in transaction
```
## Hook Pattern
```tsx
'use client';
import { useMutation } from '@tanstack/react-query';
export function useTwoPhaseUpload() {
const uploadTemp = useMutation({
mutationFn: async (file: File) => {
const fd = new FormData();
fd.append('file', file);
const { data } = await apiClient.post<{ tempId: string; expiresAt: string }>(
'/files/upload',
fd,
);
return data;
},
});
return uploadTemp;
}
```
## Form Integration (RHF)
```tsx
'use client';
import { useState } from 'react';
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { useDropzone } from 'react-dropzone';

export function CorrespondenceForm() {
const form = useForm<FormData>({ resolver: zodResolver(schema) });
const uploadTemp = useTwoPhaseUpload();
const [tempFileIds, setTempFileIds] = useState<string[]>([]);
const { getRootProps, getInputProps } = useDropzone({
accept: {
'application/pdf': ['.pdf'],
'image/vnd.dwg': ['.dwg'],
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': ['.docx'],
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': ['.xlsx'],
'application/zip': ['.zip'],
},
maxSize: 50 * 1024 * 1024, // 50 MB — must match backend
onDrop: async (files) => {
const results = await Promise.all(files.map((f) => uploadTemp.mutateAsync(f)));
setTempFileIds((prev) => [...prev, ...results.map((r) => r.tempId)]);
},
});
const onSubmit = async (values: FormData) => {
await correspondenceService.create({
...values,
tempFileIds, // committed server-side in the same DB transaction
});
setTempFileIds([]);
};
return (
<form onSubmit={form.handleSubmit(onSubmit)}>
<div {...getRootProps()} className="dropzone">
<input {...getInputProps()} />
<p>{t('upload.dragDrop')}</p>
</div>
{/* other fields */}
</form>
);
}
```
## Rules
- **Whitelist MIME types** — must mirror backend ADR-016 whitelist (`.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip`).
- **50 MB cap** — enforce client-side too (better UX) plus server-side (authoritative).
- **Show temp-file pills** with remove button — users see what will be attached.
- **Clear `tempFileIds` on success/cancel** — prevent stale IDs on subsequent submits.
- **No retry of expired temps** — if `expiresAt` passed, prompt re-upload.
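The expiry rule above can be sketched as a pure check. `expiresAt` is the ISO timestamp returned by `POST /files/upload`; the function name and signature are assumptions:

```typescript
// A temp id is usable only while `expiresAt` is still in the future;
// otherwise the UI should prompt a re-upload instead of retrying.
function isTempFileUsable(expiresAt: string, now: Date = new Date()): boolean {
  return Date.parse(expiresAt) > now.getTime();
}
```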
## ❌ Forbidden
- ❌ Uploading directly to permanent storage endpoint (no commit phase)
- ❌ Hardcoded MIME list in component (keep in shared constant file mirrored from backend)
- ❌ Ignoring `maxSize` — backend will reject but UX suffers
## Reference
- [ADR-016 Security](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- Backend rule: [`security-file-two-phase-upload.md`](../nestjs-best-practices/rules/security-file-two-phase-upload.md)
@@ -1,257 +0,0 @@
# UUID Handling (ADR-019) — March 2026 Pattern
**Project-specific: Hybrid Identifier Strategy for NAP-DMS**
This project uses ADR-019: INT Primary Key (internal) + UUIDv7 (public API). Frontend code must handle this correctly.
> **Updated pattern:** The backend now exposes `publicId` directly — there is no more `@Expose({ name: 'id' })` rename. The frontend uses `publicId` as-is; never fall back to `id`.
## The Pattern
| Source | Field Name | Type | Notes |
| ------------------------ | ------------------- | ----------------- | ----------------------------------------------------------- |
| **API Response** | `publicId` | `string` (UUIDv7) | Exposed directly (no rename) |
| **TypeScript Interface** | `publicId?: string` | UUID string | Use only this field |
| **Form DTO** | `xxxUuid` | `string` | DTO field names: `projectUuid`, `contractUuid` (input only) |
| **URL param** | `[publicId]` | `string` (UUID) | e.g. `/correspondences/[publicId]/page.tsx` |
## Critical Rules
### 1. NEVER Use `parseInt()` on UUID
```tsx
// ❌ WRONG - parseInt on UUID gives garbage
const id = parseInt(projectId); // "0195a1b2-..." → 195 (wrong!)
// ❌ WRONG - Number() on UUID
const id = Number(projectId); // NaN
// ❌ WRONG - Unary plus
const id = +projectId; // NaN
// ✅ CORRECT - Send UUID string directly to API
apiClient.get(`/projects/${projectId}`); // projectId is already UUID string
```
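The failure modes above can be verified in plain Node, with no framework involved (the UUID literal is just an example value):

```typescript
const sampleUuid = '0195a1b2-7c3d-7e4f-8a9b-0c1d2e3f4a5b'; // example UUIDv7

// parseInt stops at the first non-digit, silently truncating the UUID:
const truncated = parseInt(sampleUuid, 10); // 195

// Number() and unary plus both yield NaN for a UUID string:
const asNumber = Number(sampleUuid); // NaN
const asPlus = +sampleUuid; // NaN
```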
### 2. Use `publicId` Only — NO `id ?? ''` Fallback
```tsx
// ✅ CORRECT — types/project.ts
interface Project {
publicId?: string; // UUID from API — use only this field
projectCode: string;
projectName: string;
}
// ✅ CORRECT — Component usage
const projectOptions = projects.map((p) => ({
label: `${p.projectName} (${p.projectCode})`,
value: p.publicId ?? '', // ADR-019 — no String() wrapper, no id fallback
key: p.publicId ?? p.projectCode, // falling back to a business field is fine
}));
// ❌ WRONG — old pattern
const oldOptions = projects.map((p) => ({
value: String(p.publicId ?? p.id ?? ''), // ❌ `id ?? ''` fallback
}));
```
### 3. Form Field Names (camelCase)
```tsx
// ❌ WRONG - snake_case doesn't match TypeScript interface
fields={[{ name: 'project_id', label: 'Project' }]}
// ✅ CORRECT - camelCase matches interface
fields={[{ name: 'projectUuid', label: 'Project' }]}
// Form submission
const onSubmit = (data: { projectUuid: string }) => {
// projectUuid is UUID string - send as-is
await apiClient.post('/contracts', data);
};
```
## Select Component Pattern
```tsx
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/select';
interface ContractSelectProps {
contracts: Contract[];
value: string;
onChange: (value: string) => void;
}
export function ContractSelect({ contracts, value, onChange }: ContractSelectProps) {
return (
<Select value={value} onValueChange={onChange}>
<SelectTrigger>
<SelectValue placeholder="เลือกสัญญา" />
</SelectTrigger>
<SelectContent>
{contracts
.filter((c) => !!c.publicId) // keep only contracts that have a publicId
.map((c) => (
<SelectItem key={c.publicId} value={c.publicId!}>
{c.contractName} ({c.contractCode})
</SelectItem>
))}
</SelectContent>
</Select>
);
}
```
## Data Table Pattern
```tsx
// Show relation columns with UUID entities
const columns: ColumnDef<Discipline>[] = [
{
accessorKey: 'disciplineCode',
header: 'Code',
},
{
accessorKey: 'contract',
header: 'Contract',
cell: ({ row }) => {
const contract = row.original.contract;
return contract ? (
<span>
{contract.contractName} ({contract.contractCode})
</span>
) : (
<span className="text-muted-foreground">-</span>
);
},
},
];
```
## API Service Pattern
```tsx
// lib/services/contract.service.ts
export const contractService = {
async getById(uuid: string): Promise<Contract> {
// Send UUID string directly - backend resolves to INT
const { data } = await apiClient.get(`/contracts/${uuid}`);
return data;
},
async create(dto: CreateContractDto): Promise<Contract> {
// DTO contains projectUuid (UUID string)
const { data } = await apiClient.post('/contracts', dto);
return data;
},
async update(uuid: string, dto: Partial<CreateContractDto>): Promise<Contract> {
const { data } = await apiClient.put(`/contracts/${uuid}`, dto);
return data;
},
async delete(uuid: string): Promise<void> {
await apiClient.delete(`/contracts/${uuid}`);
},
};
```
## TypeScript Interfaces
```tsx
// ✅ CORRECT — types/entities.ts
export interface BaseEntity {
publicId?: string; // UUID — use only this field (no INT id in the interface)
createdAt?: string;
updatedAt?: string;
}
export interface Project extends BaseEntity {
projectCode: string;
projectName: string;
description?: string;
}
export interface Contract extends BaseEntity {
contractCode: string;
contractName: string;
project?: Project; // Relation (nested entity)
}
// DTO (input only — receives the UUID from the form)
export interface CreateContractDto {
projectUuid: string; // UUID string from select
contractCode: string;
contractName: string;
}
```
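One way to honor the optional `publicId` without sprinkling `?? ''` everywhere is a narrowing type guard. This is a sketch, not an existing project helper:

```typescript
interface BaseEntity {
  publicId?: string;
}

// Narrows `publicId` to a definite string, so keys and select values need
// no fallback after filtering.
function hasPublicId<T extends BaseEntity>(e: T): e is T & { publicId: string } {
  return typeof e.publicId === 'string' && e.publicId.length > 0;
}

// Usage: contracts.filter(hasPublicId).map((c) => c.publicId) is fully typed.
```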
## Form with React Hook Form + Zod
```tsx
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
const formSchema = z.object({
projectUuid: z.string().uuid('กรุณาเลือกโปรเจกต์'),
contractCode: z.string().min(1, 'กรุณาระบุรหัสสัญญา'),
contractName: z.string().min(1, 'กรุณาระบุชื่อสัญญา'),
});
type FormData = z.infer<typeof formSchema>;
export function ContractForm() {
const form = useForm<FormData>({
resolver: zodResolver(formSchema),
defaultValues: {
projectUuid: '',
contractCode: '',
contractName: '',
},
});
const onSubmit = async (data: FormData) => {
// Send UUID strings directly
await contractService.create(data);
};
return (
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)}>{/* Form fields */}</form>
</Form>
);
}
```
## URL Parameters
```tsx
// app/contracts/[publicId]/page.tsx
export default async function ContractPage({ params }: { params: Promise<{ publicId: string }> }) {
const { publicId } = await params;
// publicId is a UUID string from the URL
const contract = await contractService.getById(publicId);
return <ContractDetail contract={contract} />;
}
```
## Common Pitfalls
| Pitfall | ❌ Wrong | ✅ Right |
| ---------------------------- | ------------------------------------------------ | --------------------------------- |
| Using INT `id` | `key={entity.id}` | `key={entity.publicId}` |
| parseInt on UUID | `parseInt(projectId)` | `projectId` (string) |
| Field name mismatch | `name="project_id"` | `name="projectUuid"` |
| `id ?? ''` fallback | `value={publicId ?? id ?? ''}` | `value={publicId ?? ''}` |
| `uuid` + `publicId` together | `interface { uuid?: string; publicId?: string }` | `interface { publicId?: string }` |
## Reference
- [ADR-019 Hybrid Identifier Strategy](../../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)
- [Frontend Guidelines](../../../../specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md)
- [UUID Implementation Plan](../../../../specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md)
> **Warning**: Using `parseInt()` on UUID values causes data corruption. Always use UUID strings directly in API calls.
-108
@@ -1,108 +0,0 @@
# 🧠 NAP-DMS Agent Skills (v1.8.9)
This file defines the specialized skills and capabilities of the Document Intelligence Engine for the LCBP3 v1.8.9 project, upholding the highest standards of security and data integrity.
**Status**: Production Ready | **Last Updated**: 2026-04-22 | **Total Skills**: 20
> 📌 Shared context for all speckit-\* skills: see [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md).
---
## 🏗️ Architectural & Data Integrity
- **Identifier Strategy Mastery (ADR-019 — March 2026):**
  - Enforce **UUIDv7** as the public ID; entities extend `UuidBaseEntity` and expose `publicId` **directly** (never rename it via `@Expose({ name: 'id' })`)
  - Detect and block any use of `parseInt()`, `Number()`, or unary `+` on UUIDs, on both backend and frontend
  - Verify that entities apply `@Exclude()` to the `INT AUTO_INCREMENT` primary key so it never leaks into the API
  - The frontend uses `publicId` directly — **never** an `id ?? ''` fallback, and never a `uuid?: string` field alongside `publicId` in an interface
- **Strict Validation Engine:**
  - Enforce **Zod** for all frontend form validation
  - Enforce **class-validator** for all backend DTOs
  - Verify that every mutation request (POST/PUT/PATCH) carries an **Idempotency-Key** header
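The Idempotency-Key check can be sketched as a small header helper. The header name follows the rule above; the helper itself and its signature are assumptions:

```typescript
import { randomUUID } from 'node:crypto'; // browsers use crypto.randomUUID()

const MUTATION_METHODS = new Set(['POST', 'PUT', 'PATCH']);

// Attach a fresh Idempotency-Key to mutation requests only;
// reads (GET/HEAD) pass through untouched.
function withIdempotencyKey(
  method: string,
  headers: Record<string, string> = {},
): Record<string, string> {
  if (!MUTATION_METHODS.has(method.toUpperCase())) return headers;
  return { ...headers, 'Idempotency-Key': randomUUID() };
}
```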
## ⚙️ Workflow & Concurrency Control
- **DMS Workflow Engine Proficiency:**
  - Expert in **DSL-based state machines**; always validate every document state transition against the rules in the DSL parser
  - Prevent duplicate approvals by checking the current state in the database before running any state-transition logic
- **Collision-Free Numbering (ADR-002):**
  - Use **distributed locking** via **Redis Redlock**, combined with TypeORM `@VersionColumn`, for document-number generation
  - Never generate numbers with application-side logic alone
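The intent of the numbering rule can be shown with a shape-only sketch. The real implementation uses Redis Redlock and `@VersionColumn`; both are stubbed out here, and every name is hypothetical:

```typescript
// `withLock` stands in for Redlock: it must serialize the read-and-increment
// critical section across all application instances.
type WithLock = <T>(key: string, fn: () => T) => T;

function nextDocumentNumber(
  withLock: WithLock,
  readCurrentMax: () => number, // stands in for a DB read guarded by @VersionColumn
): number {
  // The read and the increment happen inside the lock, never outside it.
  return withLock('doc-number', () => readCurrentMax() + 1);
}
```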
- **Asynchronous Task Orchestration (ADR-008):**
  - Offload long-running work (e.g. sending notifications, correspondence routing) exclusively to **BullMQ**
## 🛡️ Security & Integrity Audit
- **RBAC Matrix Enforcement (ADR-016):**
  - Enforce **JwtAuthGuard**, **RolesGuard**, and the **CASL AbilityFactory** on every new controller
  - Verify that `AuditLogInterceptor` is applied to every API that modifies data
- **Secure File Lifecycle:**
  - Apply the **two-phase upload** flow: Upload → Temp → ClamAV Scan → Commit → Permanent
  - Enforce the file-extension whitelist and the 50 MB size cap defined in ADR-016
## 🤖 AI Boundary & Privacy (ADR-018/020)
- **Data Isolation:**
  - Guarantee that AI features run only through **Ollama (on-premises)** and never send data outside the network
  - AI accesses data only through the **DMS API** (never directly against the database or storage)
- **Human-in-the-loop Validation:**
  - Design AI outputs (e.g. extracted document metadata) to always require user confirmation before being saved to the system
## 🏷️ Domain Terminology Consistency
- **Term Correction:** Immediately correct terminology per the glossary (e.g. change Letter to **Correspondence**, Approval Flow to **Workflow Engine**)
- **i18n Guidelines:** Never hard-code Thai/English strings directly in components; use i18n keys only
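The term-correction rule can be sketched as a lookup the agent applies before emitting labels. The mapping below contains only the two glossary examples above; the helper is hypothetical:

```typescript
const GLOSSARY: Record<string, string> = {
  Letter: 'Correspondence',
  'Approval Flow': 'Workflow Engine',
};

// Returns the canonical domain term, or the input unchanged if it is
// already canonical (or not in the glossary).
function normalizeTerm(label: string): string {
  return GLOSSARY[label] ?? label;
}
```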
---
## 🔄 Skill Dependency Matrix
| Skill | Dependencies | Handoffs To | Notes |
| -------------------------- | -------------------- | -------------------------------- | ----------------------------- |
| **speckit-constitution** | None | speckit-specify | Project governance foundation |
| **speckit-specify** | speckit-constitution | speckit-clarify | Feature specification |
| **speckit-clarify** | speckit-specify | speckit-plan | Resolve ambiguities |
| **speckit-plan** | speckit-clarify | speckit-tasks, speckit-checklist | Technical design |
| **speckit-tasks** | speckit-plan | speckit-implement | Task breakdown |
| **speckit-implement** | speckit-tasks | speckit-checker | Code implementation |
| **speckit-checker** | speckit-implement | speckit-tester | Static analysis |
| **speckit-tester** | speckit-checker | speckit-reviewer | Test execution |
| **speckit-reviewer** | speckit-tester | speckit-validate | Code review |
| **speckit-validate** | speckit-reviewer | None | Requirements validation |
| **speckit-analyze** | speckit-tasks | None | Cross-artifact consistency |
| **speckit-migrate** | None | speckit-plan | Legacy code import |
| **speckit-quizme** | speckit-specify | speckit-plan | Logic validation |
| **speckit-diff** | None | speckit-plan | Version comparison |
| **speckit-status** | None | None | Progress tracking |
| **speckit-taskstoissues** | speckit-tasks | None | Issue sync |
| **speckit-checklist** | speckit-plan | None | Requirements validation |
| **nestjs-best-practices** | None | speckit-implement | Backend patterns |
| **next-best-practices** | None | speckit-implement | Frontend patterns |
| **speckit-security-audit** | None | speckit-reviewer | Security validation |
---
## 🛠️ Skill Health Monitoring
### Health Check Scripts (from repo root)
- **Bash**: `./.agents/scripts/bash/audit-skills.sh` - Comprehensive skill health audit
- **PowerShell**: `./.agents/scripts/powershell/audit-skills.ps1` - Windows equivalent
### Validation Scripts
- **Version Check**: `./.agents/scripts/bash/validate-versions.sh` - Ensure version consistency
- **Workflow Sync**: `./.agents/scripts/bash/sync-workflows.sh` - Verify workflow integration
### Health Metrics
- **Total Skills**: 20 implemented
- **Version Alignment**: v1.8.9 across all skills
- **Template Coverage**: 100% for skills requiring templates
- **Documentation**: Complete front matter + shared `_LCBP3-CONTEXT.md` appendix
### Maintenance Schedule
- **Daily**: Run `audit-skills.sh` for health monitoring
- **Weekly**: Run `validate-versions.sh` for version consistency
- **Monthly**: Review skill dependencies and update documentation
+5 -18
@@ -1,7 +1,7 @@
---
name: speckit-analyze
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
-version: 1.8.9
+version: 1.0.0
depends-on:
  - speckit-tasks
---
@@ -21,14 +21,13 @@ You are the **Antigravity Consistency Analyst**. Your role is to identify incons
## Task
### Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit-tasks` has successfully produced a complete `tasks.md`.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
-**Constitution Authority**: The project constitution (`AGENTS.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.
+**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.
### Steps
@@ -72,7 +71,7 @@ Load only the minimal necessary context from each artifact:
**From constitution:**
-- Load `AGENTS.md` for principle validation
+- Load `.specify/memory/constitution.md` for principle validation
### 3. Build Semantic Models
@@ -137,7 +136,7 @@ Output a Markdown report (no file writes) with the following structure:
## Specification Analysis Report
| ID  | Category    | Severity | Location(s)      | Summary                      | Recommendation                       |
| --- | ----------- | -------- | ---------------- | ---------------------------- | ------------------------------------ |
| A1  | Duplication | HIGH     | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
@@ -145,7 +144,7 @@ Output a Markdown report (no file writes) with the following structure:
**Coverage Summary Table:**
| Requirement Key | Has Task? | Task IDs | Notes |
| --------------- | --------- | -------- | ----- |
**Constitution Alignment Issues:** (if any)
@@ -192,15 +191,3 @@ Ask the user: "Would you like me to suggest concrete remediation edits for the t
## Context
{{args}}
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+14 -31
@@ -1,7 +1,7 @@
---
name: speckit-checker
description: Run static analysis tools and aggregate results.
-version: 1.8.9
+version: 1.0.0
depends-on: []
---
@@ -26,7 +26,6 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
### Execution Steps
1. **Detect Project Type and Tools**:
```bash
# Check for config files
ls -la | grep -E "(package.json|pyproject.toml|go.mod|Cargo.toml|pom.xml)"
```
@@ -36,7 +35,7 @@
| Config           | Tools to Run                  |
| ---------------- | ----------------------------- |
| `package.json`   | ESLint, TypeScript, npm audit |
| `pyproject.toml` | Pylint/Ruff, mypy, bandit     |
| `go.mod`         | golangci-lint, go vet         |
@@ -46,16 +45,16 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
2. **Run Linting**:
| Stack   | Command                                                                                  |
| ------- | ---------------------------------------------------------------------------------------- |
| Node/TS | `npx eslint . --format json 2>/dev/null`                                                 |
| Python  | `ruff check . --output-format json 2>/dev/null \|\| pylint --output-format=json **/*.py` |
| Go      | `golangci-lint run --out-format json`                                                    |
| Rust    | `cargo clippy --message-format=json`                                                     |
3. **Run Type Checking**:
| Stack      | Command                                    |
| ---------- | ------------------------------------------ |
| TypeScript | `npx tsc --noEmit 2>&1`                    |
| Python     | `mypy . --no-error-summary 2>&1`           |
| Go         | `go build ./... 2>&1` (types are built-in) |
@@ -63,7 +62,7 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
4. **Run Security Scanning**:
| Stack  | Command                                                    |
| ------ | ---------------------------------------------------------- |
| Node   | `npm audit --json`                                         |
| Python | `bandit -r . -f json 2>/dev/null \|\| safety check --json` |
| Go     | `govulncheck ./... 2>&1`                                   |
@@ -72,7 +71,7 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
5. **Aggregate and Prioritize**:
| Category                 | Priority |
| ------------------------ | -------- |
| Security (Critical/High) | 🔴 P1    |
| Type Errors              | 🟠 P2    |
| Security (Medium/Low)    | 🟡 P3    |
@@ -81,8 +80,7 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
| Style Issues | ⚪ P5 |
6. **Generate Report**:
````markdown
# Static Analysis Report
**Date**: [timestamp]
@@ -92,7 +90,7 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
## Tools Run
| Tool       | Status | Issues            |
| ---------- | ------ | ----------------- |
| ESLint     | ✅     | 12                |
| TypeScript | ✅     | 3                 |
| npm audit  | ⚠️     | 2 vulnerabilities |
@@ -100,7 +98,7 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
## Summary by Priority
| Priority       | Count |
| -------------- | ----- |
| 🔴 P1 Critical | X     |
| 🟠 P2 High     | X     |
| 🟡 P3 Medium   | X     |
@@ -111,19 +109,19 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
### 🔴 P1: Security Vulnerabilities
| Package | Severity | Issue               | Fix                |
| ------- | -------- | ------------------- | ------------------ |
| lodash  | HIGH     | Prototype Pollution | Upgrade to 4.17.21 |
### 🟠 P2: Type Errors
| File       | Line | Error                                            |
| ---------- | ---- | ------------------------------------------------ |
| src/api.ts | 45   | Type 'string' is not assignable to type 'number' |
### 🟡 P3: Lint Issues
| File         | Line | Rule           | Message                         |
| ------------ | ---- | -------------- | ------------------------------- |
| src/utils.ts | 12   | no-unused-vars | 'foo' is defined but never used |
## Quick Fixes
@@ -135,15 +133,12 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
# Auto-fix lint issues
npx eslint . --fix
```
## Recommendations
1. **Immediate**: Fix P1 security issues
2. **Before merge**: Fix P2 type errors
3. **Tech debt**: Address P3/P4 lint issues
````
7. **Output**:
@@ -157,15 +152,3 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
- **Be Actionable**: Every issue should have a clear fix path
- **Don't Duplicate**: Dedupe issues found by multiple tools
- **Respect Configs**: Honor project's existing linter configs
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+1 -13
@@ -1,7 +1,7 @@
---
name: speckit-checklist
description: Generate a custom checklist for the current feature based on user requirements.
-version: 1.8.9
+version: 1.0.0
---
## Checklist Purpose: "Unit Tests for English"
@@ -300,15 +300,3 @@ Sample items:
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+4 -16
@@ -1,7 +1,7 @@
---
name: speckit-clarify
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
-version: 1.8.9
+version: 1.0.0
depends-on:
  - speckit-specify
handoffs:
@@ -104,7 +104,7 @@ Execution steps:
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
@@ -119,13 +119,13 @@ Execution steps:
- Then render all options as a Markdown table:
| Option | Description |
| ------ | --------------------------------------------------------------------------------------------------- |
| A      | <Option A description> |
| B      | <Option B description> |
| C      | <Option C description> (add D/E as needed up to 5) |
| Short  | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- For shortanswer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
@@ -189,15 +189,3 @@ Behavior rules:
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: {{args}}
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+9 -21
@@ -1,7 +1,7 @@
---
name: speckit-constitution
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
-version: 1.8.9
+version: 1.0.0
handoffs:
  - label: Build Specification
    agent: speckit-specify
@@ -24,11 +24,11 @@ You are the **Antigravity Governance Architect**. Your role is to establish and
### Outline ### Outline
You are updating the project constitution at `AGENTS.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts. You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow: Follow this execution flow:
1. Load the existing constitution template at `AGENTS.md`. 1. Load the existing constitution template at `memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`. - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly. **IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
@@ -49,10 +49,10 @@ Follow this execution flow:
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations. - Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations): 4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.agents/skills/speckit-plan/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles. - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.agents/skills/speckit-specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints. - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.agents/skills/speckit-tasks/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline). - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.agents/skills/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required. - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed. - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update): 5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
@@ -69,7 +69,7 @@ Follow this execution flow:
- Dates ISO format YYYY-MM-DD. - Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate). - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `AGENTS.md` (overwrite). 7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with: 8. Output a final summary to the user with:
- New version and bump rationale. - New version and bump rationale.
@@ -87,16 +87,4 @@ If the user supplies partial updates (e.g., only one principle revision), still
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items. If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `AGENTS.md` file. Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
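The placeholder sweep described in the outline above (find every `[ALL_CAPS_IDENTIFIER]` token still left in the template) can be sketched in shell; the sample file contents here are hypothetical, not the project's real constitution:

```shell
# Sketch: list unfilled [ALL_CAPS_IDENTIFIER] placeholder tokens with line numbers.
set -eu
cd "$(mktemp -d)"
printf '# [PROJECT_NAME] Constitution\nRatified: [RATIFICATION_DATE]\nDone: 2026-01-01\n' > constitution.md
# BRE character class covers the ALL_CAPS identifier shape (letters, digits, underscore)
grep -on '\[[A-Z0-9_]\{1,\}\]' constitution.md
```

Anything this prints is a token the fill step missed; an empty result (grep exit status 1) is the pass condition.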
+2 -19
@@ -1,7 +1,7 @@
 ---
 name: speckit-diff
 description: Compare two versions of a spec or plan to highlight changes.
-version: 1.8.9
+version: 1.0.0
 depends-on: []
 ---
@@ -31,12 +31,10 @@ Compare two versions of a specification artifact and produce a structured diff r
 - If no arguments: Use `check-prerequisites.sh` to find current feature's spec.md and compare with HEAD
 2. **Load Files**:
 ```bash
 # For git comparison
 git show HEAD:<relative-path> > /tmp/old_version.md
 ```
 - Read both versions into memory
 3. **Semantic Diff Analysis**:
@@ -47,7 +45,6 @@ Compare two versions of a specification artifact and produce a structured diff r
 - **Moved**: Reorganized content (same meaning, different location)
 4. **Generate Report**:
 ```markdown
 # Diff Report: [filename]
@@ -55,7 +52,6 @@ Compare two versions of a specification artifact and produce a structured diff r
 **Date**: [timestamp]
 ## Summary
 - X additions, Y removals, Z modifications
 ## Changes by Section
@@ -63,13 +59,12 @@ Compare two versions of a specification artifact and produce a structured diff r
 ### [Section Name]
 | Type | Content | Impact |
-| ---------- | ------------------ | ----------------- |
+|------|---------|--------|
 | + Added | [new text] | [what this means] |
 | - Removed | [old text] | [what this means] |
 | ~ Modified | [before] → [after] | [what this means] |
 ## Risk Assessment
 - Breaking changes: [list any]
 - Scope changes: [list any]
 ```
@@ -84,15 +79,3 @@ Compare two versions of a specification artifact and produce a structured diff r
 - **Highlight Impact**: Explain what each change means for implementation
 - **Flag Breaking Changes**: Any change that invalidates existing work
 - **Ignore Whitespace**: Focus on semantic changes, not formatting
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
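The "Load Files" step in the speckit-diff outline above (HEAD version to `/tmp`, working-tree version alongside it) can be sketched end to end; the demo repo, spec path, and file contents here are hypothetical:

```shell
# Sketch: compare the HEAD version of a spec against the working-tree copy.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p specs/001-demo
echo "v1 requirement" > specs/001-demo/spec.md
git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm "old version"
echo "v2 requirement" > specs/001-demo/spec.md   # uncommitted edit
# For git comparison (matches the snippet in the command outline):
git show HEAD:specs/001-demo/spec.md > /tmp/old_version.md
cp specs/001-demo/spec.md /tmp/new_version.md
diff -u /tmp/old_version.md /tmp/new_version.md || true  # diff exits 1 when files differ
```

The semantic classification (Added/Removed/Modified/Moved) would then run over the two `/tmp` files rather than this raw line diff.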
+8 -20
@@ -1,7 +1,7 @@
 ---
 name: speckit-implement
 description: Execute the implementation plan by processing and executing all tasks defined in tasks.md (with Ironclad Anti-Regression Protocols)
-version: 1.8.9
+version: 1.0.0
 depends-on:
 - speckit-tasks
 ---
@@ -53,7 +53,7 @@ If a file is critical, complex, or has high dependencies (>2 affected files):
 4. **SWITCH** the imports in the consuming files one by one.
 5. **ANNOUNCE**: "Applying Strangler Pattern to avoid regression."
-_Benefit: If it breaks, we simply revert the import, not the whole logic._
+*Benefit: If it breaks, we simply revert the import, not the whole logic.*
 ### Protocol 3: Reproduction Script First (TDD)
@@ -81,7 +81,7 @@ At the start of execution and after every 3 modifications:
 ### Outline
-1. Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
 2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
 - Scan all checklist files in the checklists/ directory
@@ -136,12 +136,12 @@ At the start of execution and after every 3 modifications:
 git rev-parse --git-dir 2>/dev/null
 ```
-- Check if Dockerfile\* exists or Docker in plan.md → create/verify .dockerignore
+- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
-- Check if .eslintrc\* exists → create/verify .eslintignore
+- Check if .eslintrc* exists → create/verify .eslintignore
-- Check if eslint.config.\* exists → ensure the config's `ignores` entries cover required patterns
+- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
-- Check if .prettierrc\* exists → create/verify .prettierignore
+- Check if .prettierrc* exists → create/verify .prettierignore
 - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
-- Check if terraform files (\*.tf) exist → create/verify .terraformignore
+- Check if terraform files (*.tf) exist → create/verify .terraformignore
 - Check if .helmignore needed (helm charts present) → create/verify .helmignore
 **If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
@@ -246,15 +246,3 @@ At the start of execution and after every 3 modifications:
 ---
 Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit-tasks` first to regenerate the task list.
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
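Each ignore-file check in the list above follows one pattern: if a config matching a glob exists, make sure its companion ignore file exists. A hedged POSIX-shell sketch (the two demo configs are hypothetical stand-ins for a real repo):

```shell
# Sketch: create/verify companion ignore files for whichever configs exist.
set -eu
cd "$(mktemp -d)"
touch Dockerfile .eslintrc.json    # pretend only these two configs are present

ensure_ignore() {                  # $1 = config glob, $2 = companion ignore file
  for match in $1; do              # unquoted on purpose: let the glob expand
    [ -e "$match" ] || return 0    # no match: glob stayed literal, nothing to do
    break
  done
  [ -f "$2" ] || touch "$2"
  echo "verified $2"
}

ensure_ignore 'Dockerfile*'  .dockerignore
ensure_ignore '.eslintrc*'   .eslintignore
ensure_ignore '.prettierrc*' .prettierignore
ensure_ignore '*.tf'         .terraformignore
```

Only `.dockerignore` and `.eslintignore` get created here, since the other two globs match nothing; the "append missing critical patterns" refinement from the checklist is left out of the sketch.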
+1 -25
@@ -1,7 +1,7 @@
 ---
 name: speckit-migrate
 description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
-version: 1.8.9
+version: 1.0.0
 depends-on: []
 ---
@@ -31,7 +31,6 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
 - `--depth <n>`: Analysis depth (1=overview, 2=detailed, 3=exhaustive)
 2. **Codebase Discovery**:
 ```bash
 # Get project structure
 tree -L 3 --dirsfirst -I 'node_modules|.git|dist|build' > /tmp/structure.txt
@@ -48,7 +47,6 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
 - Map API endpoints (if applicable)
 4. **Generate spec.md** (reverse-engineered):
 ```markdown
 # [Feature Name] - Specification (Migrated)
@@ -56,38 +54,30 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
 > Review and refine before using for future development.
 ## Overview
 [Inferred from README, comments, and code structure]
 ## Functional Requirements
 [Extracted from existing functionality]
 ## Key Entities
 [From data models, schemas, types]
 ```
 5. **Generate plan.md** (reverse-engineered):
 ```markdown
 # [Feature Name] - Technical Plan (Migrated)
 ## Current Architecture
 [Documented from codebase analysis]
 ## Technology Stack
 [From package.json, imports, configs]
 ## Component Map
 [Directory → responsibility mapping]
 ```
 6. **Generate tasks.md** (completion status):
 ```markdown
 # [Feature Name] - Tasks (Migrated)
@@ -95,12 +85,10 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
 Tasks marked [ ] are inferred gaps or TODOs found in code.
 ## Existing Implementation
 - [x] [Component A] - Implemented in `src/componentA/`
 - [x] [Component B] - Implemented in `src/componentB/`
 ## Identified Gaps
 - [ ] [Missing tests for X]
 - [ ] [TODO comment at Y]
 ```
@@ -116,15 +104,3 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
 - **Preserve Intent**: Use code comments and naming to understand purpose
 - **Flag TODOs**: Any TODO/FIXME/HACK in code becomes an open task
 - **Be Conservative**: When unsure, ask rather than assume
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
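The discovery step and the "Flag TODOs" rule above can be sketched together. `tree` may not be installed everywhere, so this uses `find` as a portable stand-in, and the sample source tree is hypothetical:

```shell
# Sketch: capture project structure, then collect the TODO/FIXME/HACK markers
# that the migration turns into open tasks.
set -eu
cd "$(mktemp -d)"
mkdir -p src/componentA node_modules/dep
printf '// TODO: add tests for parser\n' > src/componentA/index.ts
# Structure overview (find as a portable stand-in for `tree -L 3 -I ...`)
find . -maxdepth 3 -not -path './node_modules*' -not -path './.git*' | sort > /tmp/structure.txt
# Every TODO/FIXME/HACK becomes an inferred-gap task in tasks.md
grep -rnE 'TODO|FIXME|HACK' src > /tmp/open_tasks.txt || true
cat /tmp/open_tasks.txt
```

Each `file:line:text` row in `/tmp/open_tasks.txt` maps to one unchecked `- [ ]` item under "Identified Gaps".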
+4 -16
@@ -1,7 +1,7 @@
 ---
 name: speckit-plan
 description: Execute the implementation planning workflow using the plan template to generate design artifacts.
-version: 1.8.9
+version: 1.0.0
 depends-on:
 - speckit-specify
 handoffs:
@@ -32,7 +32,7 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
 1. **Setup**: Run `../scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
-2. **Load context**: Read FEATURE_SPEC and `AGENTS.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
+2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
 - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -85,27 +85,15 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
 - Output OpenAPI/GraphQL schema to `/contracts/`
 3. **Agent context update**:
-- Run `../scripts/bash/update-agent-context.sh windsurf`
+- Run `../scripts/bash/update-agent-context.sh gemini`
 - These scripts detect which AI agent is in use
 - Update the appropriate agent-specific context file
 - Add only new technology from current plan
 - Preserve manual additions between markers
-**Output**: data-model.md, /contracts/\*, quickstart.md, agent-specific file
+**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
 ## Key rules
 - Use absolute paths
 - ERROR on gate failures or unresolved clarifications
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
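The implement and plan outlines both repeat the same quoting rule for arguments such as "I'm Groot". A minimal demonstration of why the `'I'\''m Groot'` syntax works:

```shell
# Inside single quotes, the sequence '\'' closes the quote, adds a literal
# apostrophe, then reopens the quote; double quotes avoid the dance entirely.
a='I'\''m Groot'
b="I'm Groot"
[ "$a" = "$b" ] && echo "both forms produce: $a"
# prints: both forms produce: I'm Groot
```

Double-quoting is simpler when the argument contains no `$`, backtick, or `\` that the shell would expand.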
@@ -3,7 +3,7 @@
 **Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
 **Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
-**Note**: This template is filled in by the `/speckit-plan` command. See `.agents/skills/plan.md` for the execution workflow.
+**Note**: This template is filled in by the `/speckit-plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
 ## Summary
@@ -29,7 +29,7 @@
 ## Constitution Check
-_GATE: Must pass before Phase 0 research. Re-check after Phase 1 design._
+*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
 [Gates determined based on constitution file]
@@ -48,7 +48,6 @@ specs/[###-feature]/
 ```
 ### Source Code (repository root)
 <!--
 ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
 for this feature. Delete unused options and expand the chosen structure with
@@ -100,6 +99,6 @@ directories captured above]
 > **Fill ONLY if Constitution Check has violations that must be justified**
 | Violation | Why Needed | Simpler Alternative Rejected Because |
-| -------------------------- | ------------------ | ------------------------------------ |
+|-----------|------------|-------------------------------------|
 | [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
 | [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
+1 -13
@@ -1,7 +1,7 @@
 ---
 name: speckit-quizme
 description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
-version: 1.8.9
+version: 1.0.0
 handoffs:
 - label: Clarify Spec Requirements
 agent: speckit-clarify
@@ -65,15 +65,3 @@ Execution steps:
 - **Be a Skeptic**: Don't assume the happy path works.
 - **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
 - **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
+7 -27
@@ -1,7 +1,7 @@
 ---
 name: speckit-reviewer
 description: Perform code review with actionable feedback and suggestions.
-version: 1.8.9
+version: 1.0.0
 depends-on: []
 ---
@@ -45,7 +45,7 @@ Review code changes and provide structured feedback with severity levels.
 3. **Review Categories**:
 | Category | What to Check |
-| ------------------- | -------------------------------------------- |
+|----------|--------------|
 | **Correctness** | Logic errors, off-by-one, null handling |
 | **Security** | SQL injection, XSS, secrets in code |
 | **Performance** | N+1 queries, unnecessary loops, memory leaks |
@@ -65,7 +65,7 @@ Review code changes and provide structured feedback with severity levels.
 5. **Severity Levels**:
 | Level | Meaning | Block Merge? |
-| ------------- | ------------------------------ | ------------ |
+|-------|---------|--------------|
 | 🔴 CRITICAL | Security issue, data loss risk | Yes |
 | 🟠 HIGH | Bug, logic error | Yes |
 | 🟡 MEDIUM | Code smell, maintainability | Maybe |
@@ -73,8 +73,7 @@ Review code changes and provide structured feedback with severity levels.
 | 💡 SUGGESTION | Nice-to-have, optional | No |
 6. **Generate Review Report**:
-```markdown
+````markdown
 # Code Review Report
 **Date**: [timestamp]
@@ -84,7 +83,7 @@ Review code changes and provide structured feedback with severity levels.
 ## Summary
 | Severity | Count |
-| -------------- | ----- |
+|----------|-------|
 | 🔴 Critical | X |
 | 🟠 High | X |
 | 🟡 Medium | X |
@@ -94,41 +93,34 @@ Review code changes and provide structured feedback with severity levels.
 ## Findings
 ### 🔴 CRITICAL: SQL Injection Risk
 **File**: `src/db/queries.ts:45`
 **Code**:
 ```typescript
 const query = `SELECT * FROM users WHERE id = ${userId}`;
 ```
+````
 **Issue**: User input directly concatenated into SQL query
 **Fix**: Use parameterized queries:
 ```typescript
 const query = 'SELECT * FROM users WHERE id = $1';
 await db.query(query, [userId]);
 ```
 ### 🟡 MEDIUM: Complex Function
 **File**: `src/auth/handler.ts:120`
 **Issue**: Function has cyclomatic complexity of 15
 **Suggestion**: Extract into smaller functions
 ## What's Good
 - Clear naming conventions
 - Good test coverage
 - Proper TypeScript types
 ## Recommended Actions
 1. **Must fix before merge**: [critical/high items]
 2. **Should address**: [medium items]
 3. **Consider for later**: [low/suggestions]
-```
 ```
 7. **Output**:
@@ -142,15 +134,3 @@ Review code changes and provide structured feedback with severity levels.
 - **Be Balanced**: Mention what's good, not just what's wrong
 - **Prioritize**: Focus on real issues, not style nitpicks
 - **Be Educational**: Explain WHY something is an issue
----
-## LCBP3-DMS Context (MUST LOAD)
-Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
-- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
-- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
-- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
-- Helper script real paths
-- Commit checklist
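The CRITICAL finding in the sample report above (SQL built by template-literal interpolation) can be caught mechanically. A crude grep sketch, with a hypothetical offending file; a real reviewer pass would use an AST-based linter rather than a regex:

```shell
# Sketch: flag template-literal SQL that interpolates ${...} into the query.
set -eu
cd "$(mktemp -d)"
mkdir -p src/db
cat > src/db/queries.ts <<'EOF'
const query = `SELECT * FROM users WHERE id = ${userId}`;
EOF
# ERE: a SELECT followed by a ${ interpolation on the same line
if grep -rnE 'SELECT.*\$\{' src; then
  echo "finding: string-interpolated SQL - use parameterized queries"
fi
```

Matches map straight to report findings (`file:line` for the **File** field), with the parameterized-query rewrite from the sample report as the suggested **Fix**.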
+8 -20
@@ -1,7 +1,7 @@
--- ---
name: speckit-security-audit name: speckit-security-audit
description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements. description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
version: 1.8.9 version: 1.0.0
depends-on: depends-on:
- speckit-checker - speckit-checker
--- ---
@@ -12,16 +12,16 @@ You are the **Antigravity Security Sentinel**. Your mission is to identify secur
## Task ## Task
Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security-authentication.md`. Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.
## Context Loading ## Context Loading
Before auditing, load the security context: Before auditing, load the security context:
1. Read `specs/06-Decision-Records/ADR-016-security-authentication.md` for project security decisions 1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns 2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions 3. Read `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` for CASL permission definitions
4. Read `AGENTS.md` for security rules (Section: Security Rules Non-Negotiable + Security & Integrity Audit Protocol) 4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)
## Execution Steps ## Execution Steps
@@ -44,7 +44,7 @@ Scan the `backend/src/` directory for each OWASP category:
### Phase 2: CASL Authorization Audit ### Phase 2: CASL Authorization Audit
1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` 1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql`
2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage: 2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:
```bash ```bash
@@ -132,7 +132,7 @@ Check LCBP3-DMS-specific file handling per ADR-016:
## Severity Classification

| Severity        | Description                                           | Response                |
| --------------- | ----------------------------------------------------- | ----------------------- |
| 🔴 **Critical** | Exploitable vulnerability, data exposure, auth bypass | Immediate fix required  |
| 🟠 **High**     | Missing security control, potential escalation path   | Fix before next release |
| 🟡 **Medium**   | Best practice violation, defense-in-depth gap         | Plan fix in sprint      |
@@ -152,7 +152,7 @@ Generate a structured report:
## Summary

| Severity    | Count |
| ----------- | ----- |
| 🔴 Critical | X     |
| 🟠 High     | X     |
| 🟡 Medium   | X     |
@@ -197,15 +197,3 @@ Generate a structured report:
- **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
- **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
- **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+3 -15
@@ -1,7 +1,7 @@
---
name: speckit-specify
description: Create or update the feature specification from a natural language feature description.
version: 1.8.9
handoffs:
  - label: Build Technical Plan
    agent: speckit-plan
@@ -64,8 +64,8 @@ Given that feature description, do this:
d. Run the script `../scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
   - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
   - Bash example: `.agents/scripts/bash/create-new-feature.sh --json "{{args}}" --number 5 --short-name "user-auth" "Add user authentication"`
   - PowerShell example: `.agents/scripts/powershell/create-new-feature.ps1 -Json -Args '{{args}}' -Number 5 -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:

- Check all three sources (remote branches, local branches, specs directories) to find the highest number
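The three-source scan above can be sketched in shell. The `NNN-short-name` branch naming and the `specs/NNN-*` directory layout are assumptions taken from the surrounding steps, not verified against the repository:

```shell
# Hedged sketch: find the highest leading feature number across remote
# branches, local branches, and specs/ directories, then report the next
# free number. Naming conventions here are assumptions from the text.
highest=0
candidates=$(
  git branch -a --format='%(refname:short)' 2>/dev/null
  ls -d specs/[0-9]*/ 2>/dev/null
)
for name in $candidates; do
  n=$(basename "$name" | grep -oE '^[0-9]+')
  if [ -n "$n" ] && [ "$((10#$n))" -gt "$highest" ]; then
    highest=$((10#$n))
  fi
done
echo "next feature number: $((highest + 1))"
```

If the repository zero-pads numbers (e.g. `005-user-auth`), the `10#` base prefix keeps bash from reading them as octal.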
@@ -262,15 +262,3 @@ Success criteria must be:
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../\_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -5,7 +5,7 @@
**Status**: Draft
**Input**: User description: "$ARGUMENTS"

## User Scenarios & Testing _(mandatory)_

<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
@@ -75,7 +75,7 @@
- What happens when [boundary condition]?
- How does system handle [error scenario]?

## Requirements _(mandatory)_

<!--
ACTION REQUIRED: The content in this section represents placeholders.
@@ -90,17 +90,17 @@
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

_Example of marking unclear requirements:_

- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Key Entities _(include if feature involves data)_

- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

## Success Criteria _(mandatory)_

<!--
ACTION REQUIRED: Define measurable success criteria.
+5 -21
@@ -1,7 +1,7 @@
---
name: speckit-status
description: Display a dashboard showing feature status, completion percentage, and blockers.
version: 1.8.9
depends-on: []
---
@@ -26,7 +26,6 @@ Generate a dashboard view of all features and their completion status.
### Execution Steps

1. **Discover Features**:

   ```bash
   # Find all feature directories
   find .specify/features -maxdepth 1 -type d 2>/dev/null || echo "No features found"
@@ -35,14 +34,13 @@ Generate a dashboard view of all features and their completion status.
2. **For Each Feature, Gather Metrics**:

   | Artifact         | Check              | Metric                     |
   | ---------------- | ------------------ | -------------------------- |
   | spec.md          | Exists?            | Has [NEEDS CLARIFICATION]? |
   | plan.md          | Exists?            | All sections complete?     |
   | tasks.md         | Exists?            | Count [x] vs [ ] vs [/]    |
   | checklists/\*.md | All items checked? | Checklist completion %     |
3. **Calculate Completion**:

   ```
   Phase 1 (Specify): spec.md exists & no clarifications needed
   Phase 2 (Plan): plan.md exists & complete
@@ -58,7 +56,6 @@ Generate a dashboard view of all features and their completion status.
- Missing dependencies

5. **Generate Dashboard**:

   ```markdown
   # Speckit Status Dashboard
@@ -68,19 +65,18 @@ Generate a dashboard view of all features and their completion status.
## Overview

| Feature      | Phase     | Progress | Blockers | Next Action              |
| ------------ | --------- | -------- | -------- | ------------------------ |
| auth-system  | Implement | 75%      | 0        | Complete remaining tasks |
| payment-flow | Plan      | 40%      | 2        | Resolve clarifications   |

## Feature Details

### [Feature Name]
```
Spec: ████████░░ 80%
Plan: ██████████ 100%
Tasks: ██████░░░░ 60%
```

**Blockers**:
@@ -109,15 +105,3 @@ Generate a dashboard view of all features and their completion status.
- **Be Visual**: Use progress bars and tables
- **Be Actionable**: Every status should have a "next action"
- **Be Fast**: Cache nothing, always recalculate
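The checkbox metric above (`[x]` vs `[ ]` vs `[/]`) can be sketched as a small helper. The marker syntax comes from the metrics table; the percentage math and bar rendering are illustrative assumptions, not the dashboard spec:

```shell
# Hedged sketch: compute tasks.md completion from checkbox markers and
# render a ten-cell progress bar. Marker syntax is from the metrics
# table; the output format is an assumption.
f="${TASKS_FILE:-tasks.md}"
done_n=$(grep -c '\[x\]' "$f" 2>/dev/null); done_n=${done_n:-0}
open_n=$(grep -c '\[ \]' "$f" 2>/dev/null); open_n=${open_n:-0}
part_n=$(grep -c '\[/\]' "$f" 2>/dev/null); part_n=${part_n:-0}
total=$((done_n + open_n + part_n))
pct=0
if [ "$total" -gt 0 ]; then pct=$((100 * done_n / total)); fi
filled=$((pct / 10))
bar=""
i=0
while [ "$i" -lt 10 ]; do
  if [ "$i" -lt "$filled" ]; then bar="${bar}█"; else bar="${bar}░"; fi
  i=$((i + 1))
done
echo "Tasks: $bar ${pct}%"
```

A fully checked file fills the bar at 100%; partially done files scale the bar down proportionally.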
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
+1 -13
@@ -1,7 +1,7 @@
---
name: speckit-tasks
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
version: 1.8.9
depends-on:
  - speckit-plan
handoffs:
@@ -145,15 +145,3 @@ Every task MUST strictly follow this format:
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,5 +1,6 @@
---
description: "Task list template for feature implementation"
---

# Tasks: [FEATURE NAME]
@@ -82,8 +83,8 @@ Examples of foundational tasks (adjust based on your project):
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**

- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test\_[name].py

### Implementation for User Story 1
@@ -106,8 +107,8 @@ Examples of foundational tasks (adjust based on your project):
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️

- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test\_[name].py

### Implementation for User Story 2
@@ -128,8 +129,8 @@ Examples of foundational tasks (adjust based on your project):
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️

- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test\_[name].py

### Implementation for User Story 3
+1 -13
@@ -1,7 +1,7 @@
---
name: speckit-taskstoissues
description: Convert existing tasks into actionable, dependency-ordered issues for the feature based on available design artifacts.
version: 1.8.9
depends-on:
  - speckit-tasks
tools: ['github/github-mcp-server/issue_write']
@@ -204,15 +204,3 @@ Convert all tasks from `tasks.md` into well-structured issues on the appropriate
- **Label Consistency**: Use a consistent label taxonomy across all issues
- **Platform Safety**: Never create issues on repos that don't match the git remote
- **Dry Run Support**: Always support `--dry-run` to preview before creating
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist

Some files were not shown because too many files have changed in this diff.