# Compare Commits: `02400fd88c...main` (28 commits)
---
trigger: always_on
---

# NAP-DMS Project Context

## Role & Persona

Act as a **Senior Full Stack Developer** specialized in:

- NestJS, Next.js, TypeScript
- Document Management Systems (DMS)

Focus:

- Data Integrity
- Security
- Maintainability
- Performance

You are a **Document Intelligence Engine** — not a general chatbot.
Every response must be **precise**, **spec-compliant**, and **production-ready**.

## Project Information

- **Project:** NAP-DMS (LCBP3)
- **Version:** 1.8.5
- **Stack:** NestJS + Next.js + TypeScript + MariaDB + Ollama (AI)
- **Repo:** https://git.np-dms.work/np-dms/lcbp3

## Rule Enforcement Tiers

### 🔴 Tier 1 — CRITICAL (CI BLOCKER)

Build fails immediately if violated:

- Security (Auth, RBAC, Validation)
- UUID Strategy (ADR-019) — no `parseInt` / `Number` / `+` on UUIDs
- Database correctness — verify the schema before writing queries
- File upload security (ClamAV + whitelist)
- AI validation boundary (ADR-018)
- Error handling strategy (ADR-007)
- Forbidden patterns: `any`, `console.log`, UUID misuse
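As an illustration only (this scanner is not part of the project's actual CI tooling), the forbidden patterns above can be caught with a simple source scan:

```typescript
// Sketch: scan source text for Tier 1 forbidden patterns (the pattern list is illustrative)
const FORBIDDEN: Array<{ name: string; re: RegExp }> = [
  { name: 'any type', re: /:\s*any\b/ },
  { name: 'console.log', re: /\bconsole\.log\s*\(/ },
  { name: 'parseInt on UUID-like variable', re: /\bparseInt\s*\(\s*\w*[Uu]uid\w*/ },
];

function findForbiddenPatterns(source: string): string[] {
  // คืนชื่อ pattern ที่ตรวจพบในซอร์สโค้ด
  return FORBIDDEN.filter(({ re }) => re.test(source)).map(({ name }) => name);
}
```

A pre-commit hook could run this over staged `.ts` files and block the commit on any hit.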
### 🟡 Tier 2 — IMPORTANT (CODE REVIEW)

Must fix before merge:

- Architecture patterns (thin controllers, business logic in services)
- Test coverage (80%+ business logic, 70%+ backend overall)
- Cache invalidation
- Naming conventions

### 🟢 Tier 3 — GUIDELINES

Best practices — follow when possible:

- Code style / formatting (handled by Prettier)
- Comment completeness
- Minor optimizations
---
trigger: always_on
---

# ADR-019 UUID Strategy

## CRITICAL RULES

- **NEVER** use `parseInt()` on UUID values
- **NEVER** use `Number()` on UUID values
- **NEVER** use the `+` operator on UUID values
- **ALWAYS** use `publicId` (string UUID) in API responses
- **NEVER** expose the internal INT `id` in API responses (use `@Exclude()`)

## Identifier Types

| Context          | Type                      | Notes                                       |
| ---------------- | ------------------------- | ------------------------------------------- |
| Internal / DB FK | `INT AUTO_INCREMENT`      | Never exposed in the API                    |
| Public API / URL | `UUIDv7` (MariaDB native) | Stored as BINARY(16), no transformer needed |
| Entity property  | `publicId: string`        | Exposed directly in the API (no transformation) |
| API response     | `publicId: string` (UUID) | INT `id` has `@Exclude()` — never appears   |
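At API boundaries, a runtime guard can enforce the identifier table above. A minimal sketch (the `isUuidV7` helper name is illustrative, not mandated by ADR-019):

```typescript
// Sketch: accept only well-formed UUIDv7 strings (version nibble 7, RFC variant 8/9/a/b)
const UUID_V7_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-7[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/;

function isUuidV7(value: string): boolean {
  // ตรวจสอบรูปแบบ UUIDv7 ก่อนนำไปใช้งาน
  return UUID_V7_RE.test(value);
}
```

Rejecting malformed values early keeps `parseInt`-style misuse from ever producing a "working" number.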
## Backend Pattern (NestJS/TypeORM)

```typescript
// Entity
@Entity()
class Project extends UuidBaseEntity {
  @Column({ type: 'uuid' })
  publicId: string; // UUID string, no transformation needed

  @PrimaryGeneratedColumn()
  @Exclude()
  id: number; // Internal INT, never exposed
}

// API response → { id: "019505a1-7c3e-7000-8000-abc123def456" }
// Uses publicId directly; no @Expose({ name: 'id' }) needed
```

## Frontend Pattern (Next.js)

```typescript
// ✅ CORRECT — use publicId only
type ProjectOption = {
  publicId?: string; // No uuid, no id fallback
  projectName?: string;
};

// ❌ WRONG — multiple identifiers cause confusion
type BadProjectOption = {
  publicId?: string;
  uuid?: string; // Don't do this
  id?: number; // Don't do this
};

// ❌ NEVER use parseInt on a UUID
parseInt(projectId); // "019505a1-..." → 19505 (WRONG!)

// ❌ NEVER fall back to id
const badValue = c.publicId ?? c.id ?? ''; // Wrong!

// ✅ CORRECT — use publicId only
const value = c.publicId; // "019505a1-7c3e-7000-8000-abc123def456"
```

## Related Documents

- `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md`
- `specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md`
---
trigger: always_on
---

# Security Rules (Non-Negotiable)

## Mandatory Security Requirements

1. **Idempotency:** All critical `POST`/`PUT`/`PATCH` endpoints MUST validate the `Idempotency-Key` header
2. **Two-Phase File Upload:** Upload → Temp → Commit → Permanent
3. **Race Conditions:** Redis Redlock + TypeORM `@VersionColumn` for Document Numbering
4. **Validation:** Zod (frontend) + class-validator (backend DTOs)
5. **Passwords:** bcrypt with 12 salt rounds, minimum 8 characters, rotate every 90 days
6. **Rate Limiting:** `ThrottlerGuard` on all auth endpoints
7. **File Upload:** Whitelist PDF/DWG/DOCX/XLSX/ZIP, max 50MB, ClamAV scan
8. **AI Isolation (ADR-018):** Ollama on the Admin Desktop ONLY — NO direct DB/storage access
9. **Error Handling (ADR-007):** Use layered error classification with user-friendly messages
10. **AI Integration (ADR-020):** RFA-First approach with a unified pipeline architecture
11. **AI Audit Trail:** Log all AI interactions and human validations
12. **Rate Limiting (AI):** Apply to AI endpoints to prevent abuse
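Rule 7's whitelist and size cap can be expressed as a small pure check. This is a sketch; the real implementation also runs ClamAV, which is not covered here:

```typescript
// Sketch of rule 7: extension whitelist + 50MB cap (virus scanning handled separately by ClamAV)
const ALLOWED_EXTENSIONS = new Set(['pdf', 'dwg', 'docx', 'xlsx', 'zip']);
const MAX_UPLOAD_BYTES = 50 * 1024 * 1024;

function isUploadAllowed(filename: string, sizeBytes: number): boolean {
  // ตรวจสอบนามสกุลไฟล์และขนาดก่อนรับอัปโหลด
  const ext = filename.toLowerCase().split('.').pop() ?? '';
  return ALLOWED_EXTENSIONS.has(ext) && sizeBytes > 0 && sizeBytes <= MAX_UPLOAD_BYTES;
}
```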
## Full Documentation

`specs/06-Decision-Records/ADR-016-security-authentication.md`

## Security Checklist (Before Every Commit)

- [ ] Input validation implemented (Zod/class-validator)
- [ ] RBAC/CASL permissions checked
- [ ] No SQL injection vulnerabilities
- [ ] File upload validation (whitelist + ClamAV)
- [ ] Rate limiting applied to auth endpoints
- [ ] AI boundary enforcement (ADR-018) — no direct DB/storage access
- [ ] AI audit logging implemented for AI interactions
- [ ] Error handling follows ADR-007 layered classification
- [ ] OWASP Top 10 review passed
---
trigger: always_on
---

# TypeScript Rules

## Strict Requirements

- **Strict Mode** — all strict checks enforced
- **ZERO `any` types** — use proper types, or `unknown` plus narrowing
- **ZERO `console.log`** — use the NestJS `Logger` (backend); remove before commit (frontend)
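The `unknown` plus narrowing rule in practice, as a sketch (the helper name is illustrative):

```typescript
// ✅ Accept `unknown` instead of `any`, then narrow before use
function toPositiveInt(raw: unknown): number {
  // รับค่าที่ไม่ทราบชนิด แล้วตรวจสอบชนิดก่อนใช้งาน
  if (typeof raw === 'number' && Number.isInteger(raw) && raw > 0) return raw;
  if (typeof raw === 'string' && /^[0-9]+$/.test(raw) && Number(raw) > 0) return Number(raw);
  throw new TypeError('Expected a positive integer');
}
```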
## Comment Language Policy

- **Comments:** Thai (easy for the Thai team to understand)
- **Code identifiers:** English (variables, functions, classes)

## Error Handling Pattern

```typescript
// Backend (NestJS)
import { Logger } from '@nestjs/common';
const logger = new Logger('ServiceName');

// Use the logger instead of console.log
logger.error('Error message', error.stack);
throw new HttpException('Message', HttpStatus.BAD_REQUEST);

// Frontend (Next.js)
// Remove all console.log calls before commit
// Use proper error boundaries and toast notifications
```
---
trigger: always_on
---

# Domain Terminology

## DMS Glossary

| ✅ Use             | ❌ Don't Use                          |
| ------------------ | ------------------------------------- |
| Correspondence     | Letter, Communication, Document       |
| RFA                | Approval Request, Submit for Approval |
| Transmittal        | Delivery Note, Cover Letter           |
| Circulation        | Distribution, Routing                 |
| Shop Drawing       | Construction Drawing                  |
| Contract Drawing   | Design Drawing, Blueprint             |
| Workflow Engine    | Approval Flow, Process Engine         |
| Document Numbering | Document ID, Auto Number              |
| RBAC               | Permission System (generic)           |

## Full Glossary

`specs/00-overview/00-02-glossary.md`

## Key Spec Files Priority

Spec priority: **`06-Decision-Records`** > **`05-Engineering-Guidelines`** > others

| Document                | Path                                                              | Use When                        |
| ----------------------- | ----------------------------------------------------------------- | ------------------------------- |
| **Glossary**            | `specs/00-overview/00-02-glossary.md`                             | Verify domain terminology       |
| **Schema Tables**       | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`     | Before writing any query        |
| **Data Dictionary**     | `specs/03-Data-and-Storage/03-01-data-dictionary.md`              | Field meanings + business rules |
| **Edge Cases**          | `specs/01-Requirements/01-06-edge-cases-and-rules.md`             | Prevent bugs in flows           |
| **ADR-019 UUID**        | `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md` | UUID-related work               |
| **Backend Guidelines**  | `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`     | NestJS patterns                 |
| **Frontend Guidelines** | `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`    | Next.js patterns                |
| **Testing Strategy**    | `specs/05-Engineering-Guidelines/05-04-testing-strategy.md`       | Coverage goals                  |
---
trigger: always_on
---

# Forbidden Actions

## ❌ Never Do This

| ❌ Forbidden                                     | ✅ Correct Approach                              |
| ----------------------------------------------- | ----------------------------------------------- |
| SQL triggers for business logic                 | NestJS service methods                          |
| `.env` files in production                      | `docker-compose.yml` environment section        |
| TypeORM migration files                         | Edit the schema SQL directly (ADR-009)          |
| Inventing table/column names                    | Verify against `schema-02-tables.sql`           |
| `any` TypeScript type                           | Proper types / generics                         |
| `console.log` in committed code                 | NestJS Logger (backend) / remove (frontend)     |
| `req: any` in controllers                       | `RequestWithUser` typed interface               |
| `parseInt()` on UUID values                     | Use the UUID string directly (ADR-019)          |
| Exposing the INT PK in API responses            | UUIDv7 (ADR-019)                                |
| AI accessing DB/storage directly                | AI → DMS API → DB (ADR-018)                     |
| Direct file operations bypassing StorageService | `StorageService` for all file moves             |
| Inline email/notification sending               | BullMQ queue job                                |
| Deploying without Release Gates                 | Complete `04-08-release-management-policy.md`   |
| AI direct cloud API calls                       | On-premises Ollama only (ADR-018)               |
| AI outputs without human validation             | Human-in-the-loop validation required (ADR-020) |

## Schema Changes (ADR-009)

- **NO TypeORM migrations** — edit the SQL schema directly
- Always check `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` before writing queries
- Update the Data Dictionary when changing fields

## UUID Handling

See `01-adr-019-uuid.md` for the complete UUID rules.

Quick reminder:

- ❌ `parseInt(uuid)` → NEVER
- ❌ `Number(uuid)` → NEVER
- ✅ Use the UUID string directly
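For reference, this is what the forbidden calls actually produce on a real UUIDv7; the silent truncation is why they are banned:

```typescript
// Demonstration only: these calls are forbidden in real code (ADR-019)
const uuid = '019505a1-7c3e-7000-8000-abc123def456';

const asNumber = Number(uuid);       // NaN: the whole string is not numeric
const asParsed = parseInt(uuid, 10); // 19505: silently truncates at the first non-digit
const correct = uuid;                // ✅ use the string as-is
```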
---
trigger: always_on
globs:
- "backend/**/*.service.ts"
- "backend/**/*.controller.ts"
- "backend/**/*.dto.ts"
- "backend/**/*.entity.ts"
---

# Backend Patterns (NestJS)

## Architecture

- **Thin controllers** — business logic lives in the service layer
- **DTO validation** — class-validator + class-transformer
- **RBAC** — CASL for authorization
- **Error handling** — Logger + HttpException

## UUID Resolution Pattern

```typescript
// Controller — accept the UUID in the DTO
@Post()
async create(@Body() dto: CreateCorrespondenceDto) {
  // Resolve the UUID to an internal ID
  const contract = await this.contractService.findOneByUuid(dto.contractUuid);
  const contractId = contract.id; // Internal INT for DB queries

  return this.service.create(dto, contractId);
}

// Service — use the internal ID for DB operations
async create(dto: CreateCorrespondenceDto, contractId: number) {
  // Use contractId (INT) for database queries
  const correspondence = this.repo.create({
    contractId, // FK is INT
    // ... other fields
  });
  return this.repo.save(correspondence);
}
```

## API Response Pattern

```typescript
// Entity
@Entity()
class Contract extends UuidBaseEntity {
  @Column({ type: 'uuid' })
  publicId: string;

  @PrimaryGeneratedColumn()
  @Exclude()
  id: number;
}

// The response automatically includes publicId as 'id'
// { id: "019505a1-7c3e-7000-8000-abc123def456", ... }
```

## Full Guidelines

`specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
---
trigger: always_on
globs:
- "frontend/**/*.tsx"
- "frontend/**/*.ts"
- "frontend/**/*.css"
---

# Frontend Patterns (Next.js)

## Form Handling

- **RHF** (React Hook Form) for form management
- **Zod** for validation schemas
- **TanStack Query** for server state

## UUID Handling

```typescript
// ✅ CORRECT — use publicId only
interface ProjectOption {
  publicId?: string;
  projectName?: string;
}

// Select options
const options = contracts.map(c => ({
  label: `${c.contractName} (${c.contractCode})`,
  value: c.publicId!, // Use publicId, no fallback to id
}));

// ❌ WRONG — never use these patterns
const value = c.publicId ?? c.id ?? ''; // Wrong!
const id = parseInt(projectId); // Wrong — parseInt on a UUID!
```

## API Client Pattern

```typescript
// Use publicId directly in API calls
const contract = await contractService.getById(publicId);

// Form submission with a UUID
const onSubmit = async (data: FormData) => {
  await correspondenceService.create({
    contractUuid: selectedContract.publicId!, // UUID string
    // ... other fields
  });
};
```

## Full Guidelines

`specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
---
trigger: always_on
---

# Development Flow

## 🔴 Critical Work — DB / API / Security / Workflow Engine

**MUST complete all steps:**

1. **Glossary check** — verify domain terms in `00-02-glossary.md`
2. **Read the spec** — select from the Key Spec Files table
3. **Check the schema** — verify tables/columns in `schema-02-tables.sql`
4. **Check the data dictionary** — confirm field meanings + business rules
5. **Scan edge cases** — `01-06-edge-cases-and-rules.md`
6. **Check ADRs** — verify decisions align (ADR-009, ADR-018, ADR-019)
7. **Write code** — TypeScript strict, no `any`, no `console.log`

## 🟡 Normal Work — UI / Feature / Integration

- Follow existing patterns in the codebase
- Check the spec for the relevant module only
- No need to read all specs

## 🟢 Quick Fix — Bug Fix / Typo / Style

- Fix directly
- Add a minimal test if logic changed
- Check forbidden patterns before commit

## Context-Aware Triggers

| Request                                   | Files to Check                                          | Expected Response                                   |
| ----------------------------------------- | ------------------------------------------------------- | --------------------------------------------------- |
| "สร้าง API ใหม่" (create a new API)          | `05-02-backend-guidelines.md`, `schema-02-tables.sql`   | NestJS Controller + Service + DTO + CASL Guard      |
| "แก้ฟอร์ม frontend" (fix a frontend form)   | `05-03-frontend-guidelines.md`, `01-06-edge-cases.md`   | RHF + Zod + TanStack Query + Thai comments          |
| "เพิ่ม field ใหม่" (add a new field)         | `ADR-009`, `data-dictionary.md`, `schema-02-tables.sql` | Edit SQL directly + update Data Dictionary + Entity |
| "ตรวจสอบ UUID" (verify UUID handling)      | `ADR-019`, `05-07-hybrid-uuid-implementation-plan.md`   | UUIDv7 MariaDB native UUID + TransformInterceptor   |
| "สร้าง migration" (create a migration)     | `ADR-009`, `03-06-migration-business-scope.md`          | Edit the SQL schema directly + n8n workflow         |
| "ตรวจสอบ permission" (check permissions)   | `seed-permissions.sql`, `ADR-016`                       | CASL 4-Level RBAC matrix                            |
| "deploy production"                       | `04-08-release-management-policy.md`, `ADR-015`         | Release Gates + Blue-Green strategy                 |
| "เพิ่ม test" (add tests)                    | `05-04-testing-strategy.md`                             | Coverage goals + test patterns                      |
---
trigger: always_on
---

# Commit Checklist

## Pre-Commit Verification

- [ ] UUID pattern verified (no `parseInt` on UUIDs)
- [ ] No `any` types in TypeScript
- [ ] No `console.log` in committed code
- [ ] Comments in Thai
- [ ] Code identifiers in English
- [ ] Schema changes made via SQL directly (not migrations)
- [ ] Test coverage meets targets (backend 70%+, business logic 80%+)
- [ ] Relevant ADRs checked (ADR-009, ADR-018, ADR-019)
- [ ] Glossary terms used correctly
- [ ] Error handling complete (Logger + HttpException)
- [ ] i18n keys used instead of hardcoded text
- [ ] Cache invalidation handled when data is modified
- [ ] Security checklist passed (OWASP Top 10)

## Commit Message Format

```
type(scope): description

[optional body]
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`

Examples:

- `feat(correspondence): add originator organization validation`
- `fix(uuid): correct parseInt usage to string comparison`
- `spec(agents): bump to v1.8.5 - refactor structure`
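As an illustration (not actual project tooling), a commit-msg hook could enforce the header format. Note that `spec` is included below because the last example above uses it, even though it is absent from the Types list:

```typescript
// Sketch: validate `type(scope): description` commit headers (regex is illustrative)
const COMMIT_TYPES = ['feat', 'fix', 'docs', 'style', 'refactor', 'test', 'chore', 'spec'];
const COMMIT_RE = new RegExp(`^(${COMMIT_TYPES.join('|')})\\([a-z0-9-]+\\): .+$`);

function isValidCommitHeader(header: string): boolean {
  // ตรวจสอบบรรทัดแรกของ commit message
  return COMMIT_RE.test(header);
}
```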
---
trigger: always_on
---

# ADR-007 Error Handling Strategy

## CRITICAL RULES

- **ALWAYS** use layered error classification (Validation, Business, System)
- **NEVER** expose technical details to end users
- **ALWAYS** provide user-friendly error messages with recovery guidance
- **ALWAYS** log technical details for debugging
- **NEVER** use generic error messages without context

## Error Classification

| Error Type     | Description               | User Message             | Technical Log              |
| -------------- | ------------------------- | ------------------------ | -------------------------- |
| **Validation** | Input validation failures | Clear field-level errors | Full validation details    |
| **Business**   | Business rule violations  | Actionable guidance      | Business context + user ID |
| **System**     | Infrastructure failures   | Generic "try again"      | Full stack trace + metrics |
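The classification table can be encoded as a small mapping. A sketch with illustrative error class names (not the project's actual hierarchy):

```typescript
// Sketch: map an error instance to its ADR-007 layer
type ErrorLayer = 'validation' | 'business' | 'system';

class ValidationError extends Error {}
class BusinessError extends Error {}

function classifyError(err: unknown): ErrorLayer {
  // จำแนกชั้นของข้อผิดพลาดตาม ADR-007
  if (err instanceof ValidationError) return 'validation';
  if (err instanceof BusinessError) return 'business';
  return 'system'; // anything unrecognized is treated as an infrastructure failure
}
```

A global exception filter can switch on this layer to pick the user message and the logging detail level.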
## Backend Pattern (NestJS)

```typescript
// Custom exception hierarchy
export class BusinessException extends HttpException {
  constructor(
    message: string,
    userMessage: string,
    recoveryAction?: string,
    errorCode?: string
  ) {
    super({ message, userMessage, recoveryAction, errorCode }, 400);
  }
}

// Global exception filter
@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    // Classify the error and build an appropriate response
    // Log technical details
    // Return a user-friendly message
  }
}
```

## Frontend Pattern (Next.js)

```typescript
// Error display component
const ErrorDisplay = ({ error, onRetry }) => {
  const userMessage = error.userMessage || 'เกิดข้อผิดพลาด'; // "An error occurred"
  const recoveryAction = error.recoveryAction;

  return (
    <div>
      <p>{userMessage}</p>
      {recoveryAction && <p>{recoveryAction}</p>}
      {onRetry && <button onClick={onRetry}>ลองใหม่</button>} {/* "Try again" */}
    </div>
  );
};
```

## Required Implementation

- [ ] Global exception filter with layered classification
- [ ] Custom exception hierarchy (Validation, Business, System)
- [ ] Standardized error response DTOs
- [ ] Frontend error display components
- [ ] Error recovery mechanisms where applicable

## Related Documents

- `specs/06-Decision-Records/ADR-007-error-handling-strategy.md`
- `specs/06-Decision-Records/ADR-010-logging-monitoring-strategy.md`
---
trigger: always_on
---

# ADR-020 AI Integration Architecture

## CRITICAL RULES

- **ALWAYS** follow the ADR-018 AI boundary policy (isolation on the Admin Desktop)
- **ALWAYS** use the RFA-First approach for AI implementation
- **NEVER** allow AI direct database/storage access
- **ALWAYS** implement human-in-the-loop validation
- **NEVER** send sensitive data to cloud AI services

## AI Integration Patterns

### Architecture Overview

```
Frontend → AI Gateway API → Admin Desktop (Ollama) → Backend Validation
```

### Key Components

| Component         | Location                  | Purpose                                  |
| ----------------- | ------------------------- | ---------------------------------------- |
| **AI Gateway**    | Backend (NestJS)          | API endpoints, validation, audit logging |
| **Ollama Engine** | Admin Desktop (Desk-5439) | LLM inference (Gemma 4)                  |
| **OCR Engine**    | Admin Desktop (Desk-5439) | Thai/English text extraction             |
| **Orchestrator**  | QNAP NAS (n8n)            | Workflow management                      |

## Backend Implementation (NestJS)

```typescript
// AI module with boundary enforcement
@Module({
  controllers: [AiController],
  providers: [AiService, AiGateway],
  exports: [AiService],
})
export class AiModule {
  constructor() {
    // Enforce ADR-018 boundaries
  }
}

// AI service with validation
@Injectable()
export class AiService {
  async extractMetadata(documentId: string): Promise<AIMetadata> {
    // 1. Validate permissions
    // 2. Send to the Admin Desktop AI
    // 3. Validate the AI response
    // 4. Log the audit trail
    // 5. Return validated results
  }
}
```

## Frontend Pattern (Next.js)

```typescript
// Document review form (reusable component)
const DocumentReviewForm = ({ document, aiSuggestions }) => {
  return (
    <form>
      <Field label="Document Type" suggestions={aiSuggestions.documentType} />
      <Field label="Project Code" suggestions={aiSuggestions.projectCode} />
      <Field label="Discipline" suggestions={aiSuggestions.discipline} />

      <ConfidenceScore score={aiSuggestions.confidence} />
      <HumanValidationActions />
    </form>
  );
};
```

## Security Requirements

- **AI Isolation:** All AI processing on the Admin Desktop only
- **Data Privacy:** No cloud AI services; on-premises only
- **Audit Trail:** Log all AI interactions and human validations
- **Rate Limiting:** Prevent AI abuse and resource exhaustion
- **Validation:** All AI outputs must be validated before use
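The validation requirement can be made concrete with a confidence gate. A sketch: the 0.9 threshold and the suggestion shape are assumptions, not values from ADR-020:

```typescript
// Sketch: require human review for low-confidence AI suggestions (threshold is an assumed value)
interface AiSuggestion {
  field: string;
  value: string;
  confidence: number; // 0..1 reported by the model
}

function needsHumanReview(s: AiSuggestion, threshold = 0.9): boolean {
  // เขียนแบบนี้เพื่อให้ค่า NaN หรือค่านอกช่วงต้องผ่านการตรวจสอบโดยมนุษย์เสมอ
  return !(s.confidence >= threshold && s.confidence <= 1);
}
```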
## Required Implementation

- [ ] AiModule with ADR-018 boundary enforcement
- [ ] AI Gateway API endpoints with validation
- [ ] DocumentReviewForm reusable component
- [ ] Admin Desktop Ollama + PaddleOCR setup
- [ ] n8n workflow orchestration
- [ ] AI audit logging and monitoring
- [ ] Human-in-the-loop validation workflows

## Related Documents

- `specs/06-Decision-Records/ADR-018-ai-boundary.md`
- `specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md`
- `specs/06-Decision-Records/ADR-017-ollama-data-migration.md`
---
description: Run the full speckit pipeline from specification to validation in one command.
---

# Workflow: speckit.all

This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.

## Preparation Phase (Steps 1-5)

1. **Specify** (`/speckit.specify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
   - Execute with the user's feature description
   - Creates: `spec.md`

2. **Clarify** (`/speckit.clarify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
   - Execute to resolve ambiguities
   - Updates: `spec.md`

3. **Plan** (`/speckit.plan`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
   - Execute to create the technical design
   - Creates: `plan.md`

4. **Tasks** (`/speckit.tasks`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
   - Execute to generate the task breakdown
   - Creates: `tasks.md`

5. **Analyze** (`/speckit.analyze`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
   - Execute to validate consistency across spec, plan, and tasks
   - Output: Analysis report
   - **Gate:** If critical issues are found, stop and fix before proceeding

## Implementation Phase (Steps 6-7)

6. **Implement** (`/speckit.implement`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
   - Execute all tasks from `tasks.md` with anti-regression protocols
   - Output: Working implementation

7. **Check** (`/speckit.checker`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
   - Run static analysis (linters, type checkers, security scanners)
   - Output: Checker report

## Verification Phase (Steps 8-10)

8. **Test** (`/speckit.tester`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
   - Run tests with coverage
   - Output: Test + coverage report

9. **Review** (`/speckit.reviewer`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
   - Perform a code review
   - Output: Review report with findings

10. **Validate** (`/speckit.validate`):
    - Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
    - Verify the implementation matches the spec requirements
    - Output: Validation report (pass/fail)

## Usage

```
/speckit.all "Build a user authentication system with OAuth2 support"
```

## Pipeline Comparison

| Pipeline           | Steps                     | Use When                               |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit.prepare` | 1-5 (Specify → Analyze)   | Planning only — you'll implement later |
| `/speckit.all`     | 1-10 (Specify → Validate) | Full lifecycle in one pass             |

## On Error

If any step fails, stop the pipeline and report:

- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---

# Workflow: speckit.constitution

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`

3. **Execute**:
   - Follow the instructions in `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If the `.specify/` directory doesn't exist: initialize the speckit structure first
@@ -1,19 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
---

# Workflow: speckit.specify

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.
   - This is typically the starting point of a new feature.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the feature description for the skill's logic.

4. **On Error**:
   - If no feature description provided: Ask the user to describe the feature they want to specify
@@ -1,18 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

# Workflow: speckit.clarify

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
@@ -1,18 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---

# Workflow: speckit.plan

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
@@ -1,19 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---

# Workflow: speckit.tasks

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `plan.md` is missing: Run `/speckit.plan` first
   - If `spec.md` is missing: Run `/speckit.specify` first
@@ -1,22 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

// turbo-all

# Workflow: speckit.analyze

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit.specify` first
   - If `plan.md` is missing: Run `/speckit.plan` first
   - If `tasks.md` is missing: Run `/speckit.tasks` first
@@ -1,20 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

# Workflow: speckit.implement

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `tasks.md` is missing: Run `/speckit.tasks` first
   - If `plan.md` is missing: Run `/speckit.plan` first
   - If `spec.md` is missing: Run `/speckit.specify` first
@@ -1,21 +0,0 @@
---
description: Run static analysis tools and aggregate results.
---

// turbo-all

# Workflow: speckit.checker

1. **Context Analysis**:
   - The user may specify paths to check, or run on the entire project.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no linting tools available: Report which tools to install based on project type
   - If tools fail: Show raw error and suggest config fixes
@@ -1,21 +0,0 @@
---
description: Execute tests, measure coverage, and report results.
---

// turbo-all

# Workflow: speckit.tester

1. **Context Analysis**:
   - The user may specify test paths, options, or just run all tests.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no test framework detected: Report "No test framework found. Install Jest, Vitest, Pytest, or similar."
   - If tests fail: Show failure details and suggest fixes
@@ -1,19 +0,0 @@
---
description: Perform code review with actionable feedback and suggestions.
---

# Workflow: speckit.reviewer

1. **Context Analysis**:
   - The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no files to review: Ask user to stage changes or specify file paths
   - If not a git repo: Review current directory files instead
@@ -1,19 +0,0 @@
---
description: Validate that implementation matches specification requirements.
---

# Workflow: speckit.validate

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `tasks.md` is missing: Run `/speckit.tasks` first
   - If implementation not started: Run `/speckit.implement` first
@@ -1,51 +0,0 @@
---
description: Create a new NestJS backend feature module following project standards
---

# Create NestJS Backend Module

Use this workflow when creating a new feature module in `backend/src/modules/`.
Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-005.

## Steps

// turbo
1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting

// turbo
2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables

3. **Scaffold module folder**

   ```
   backend/src/modules/<module-name>/
   ├── <module-name>.module.ts
   ├── <module-name>.controller.ts
   ├── <module-name>.service.ts
   ├── dto/
   │   ├── create-<module-name>.dto.ts
   │   └── update-<module-name>.dto.ts
   ├── entities/
   │   └── <module-name>.entity.ts
   └── <module-name>.controller.spec.ts
   ```

4. **Create Entity** — map ONLY columns defined in the schema SQL. Use TypeORM decorators. Add `@VersionColumn()` if the entity needs optimistic locking.

5. **Create DTOs** — use `class-validator` decorators. Never use `any`. Validate all inputs.

6. **Create Service** — inject repository via constructor DI. Use transactions for multi-step writes. Add `Idempotency-Key` guard for POST/PUT/PATCH operations.
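The `Idempotency-Key` guard in step 6 can be sketched, independently of NestJS, as a cache of responses keyed by the client-supplied header. This is a minimal sketch; the key names and in-memory store are illustrative, not the project's actual implementation (which would likely persist keys with a TTL):

```typescript
// Minimal idempotency cache: replay the stored response when the same
// Idempotency-Key is seen again, instead of re-executing the write.
type StoredResponse = { status: number; body: unknown };

class IdempotencyCache {
  private store = new Map<string, StoredResponse>();

  // Runs the handler once per key and remembers its result;
  // repeated keys get the cached response back.
  async execute(
    key: string,
    handler: () => Promise<StoredResponse>,
  ): Promise<StoredResponse> {
    const cached = this.store.get(key);
    if (cached) return cached; // duplicate request: replay, don't re-run
    const result = await handler();
    this.store.set(key, result);
    return result;
  }
}
```

In a real controller this check would live in a guard or interceptor that reads the `Idempotency-Key` header before the service method runs.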
7. **Create Controller** — apply `@UseGuards(JwtAuthGuard, CaslAbilityGuard)`. Use proper HTTP status codes. Document with `@ApiTags` and `@ApiOperation`.

8. **Register in Module** — add to `imports`, `providers`, `controllers`, `exports` as needed.

9. **Register in AppModule** — import the new module in `app.module.ts`.

// turbo
10. **Write unit test** — cover service methods with Jest mocks. Run:

    ```bash
    pnpm test:watch
    ```

// turbo
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
@@ -1,64 +0,0 @@
---
description: Create a new Next.js App Router page following project standards
---

# Create Next.js Frontend Page

Use this workflow when creating a new page in `frontend/app/`.
Follows `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`, ADR-011, ADR-012, ADR-013, ADR-014.

## Steps

1. **Determine route** — decide the route path, e.g. `app/(dashboard)/documents/page.tsx`

2. **Classify components** — decide what is Server Component (default) vs Client Component (`'use client'`)
   - Server Component: initial data load, static content, SEO
   - Client Component: interactivity, forms, TanStack Query hooks, Zustand

3. **Create page file** — Server Component by default:

   ```typescript
   // app/(dashboard)/<route>/page.tsx
   import { Metadata } from 'next';

   export const metadata: Metadata = {
     title: '<Page Title> | LCBP3-DMS',
   };

   export default async function <PageName>Page() {
     return (
       <div>
         {/* Page content */}
       </div>
     );
   }
   ```

4. **Create API hook** (if client-side data needed) — add to `hooks/use-<feature>.ts`:

   ```typescript
   'use client';
   import { useQuery } from '@tanstack/react-query';
   import { apiClient } from '@/lib/api-client';

   export function use<Feature>() {
     return useQuery({
       queryKey: ['<feature>'],
       queryFn: () => apiClient.get('<endpoint>'),
     });
   }
   ```

5. **Build UI components** — use Shadcn/UI primitives. Place reusable components in `components/<feature>/`.

6. **Handle forms** — use React Hook Form + Zod schema validation. Never access form values without validation.
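The validate-before-read rule in step 6 can be shown as a dependency-free sketch of what a Zod schema would enforce (in the real page this logic would be a `z.object({...})` schema wired into React Hook Form's `zodResolver`; the form fields here are hypothetical):

```typescript
// Hypothetical login form shape — illustrative only.
interface LoginForm {
  email: string;
  password: string;
}

// Parse raw input before any code reads it as LoginForm; reject
// anything that doesn't match, instead of trusting the caller.
function parseLoginForm(raw: unknown): LoginForm | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.email !== "string" || !r.email.includes("@")) return null;
  if (typeof r.password !== "string" || r.password.length < 8) return null;
  return { email: r.email, password: r.password };
}
```

The Zod version expresses the same checks declaratively and gives the inferred `LoginForm` type for free.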
7. **Handle errors** — add `error.tsx` alongside `page.tsx` for route-level error boundaries.

8. **Add loading state** — add `loading.tsx` for Suspense fallback if page does async work.

9. **Add to navigation** — update sidebar/nav config if the page should appear in the menu.

10. **Access control** — ensure page checks CASL permissions. Redirect unauthorized users via middleware or `notFound()`.

11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
@@ -1,71 +0,0 @@
---
description: Deploy the application via Gitea Actions to QNAP Container Station
---

# Deploy to Production

Use this workflow to deploy updated backend and/or frontend to QNAP via Gitea Actions CI/CD.
Follows `specs/04-Infrastructure-OPS/` and ADR-015.

## Pre-deployment Checklist

- [ ] All tests pass locally (`pnpm test`)
- [ ] No TypeScript errors (`tsc --noEmit`)
- [ ] No `any` types introduced
- [ ] Schema changes applied to `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`
- [ ] Environment variables documented (NOT in `.env` files)

## Steps

1. **Commit and push to Gitea**

   ```bash
   git status
   git add .
   git commit -m "feat(<scope>): <description>"
   git push origin main
   ```

2. **Monitor Gitea Actions** — open Gitea web UI → Actions tab → verify the pipeline starts

3. **Pipeline stages (automatic)**
   - `build-backend` → Docker image build + push to registry
   - `build-frontend` → Docker image build + push to registry
   - `deploy` → SSH to QNAP → `docker compose pull` + `docker compose up -d`

4. **Verify backend health**

   ```bash
   curl http://<QNAP_IP>:3000/health
   # Expected: { "status": "ok" }
   ```

5. **Verify frontend**

   ```bash
   curl -I http://<QNAP_IP>:3001
   # Expected: HTTP 200
   ```

6. **Check logs in Grafana** — navigate to Grafana → Loki → filter by container name
   - Backend: `container_name="lcbp3-backend"`
   - Frontend: `container_name="lcbp3-frontend"`

7. **Verify database** — confirm schema changes are reflected (if any)
8. **Rollback (if needed)**

   ```bash
   # SSH into QNAP, pin the previous image tag for the service in
   # docker-compose.yml (e.g. image: <registry>/<service>:<previous-tag>),
   # then re-pull and restart:
   docker compose pull <service>
   docker compose up -d <service>
   ```

## Common Issues

| Symptom           | Cause                 | Fix                                 |
| ----------------- | --------------------- | ----------------------------------- |
| Backend unhealthy | DB connection failed  | Check MariaDB container + env vars  |
| Frontend blank    | Build error           | Check Next.js build logs in Grafana |
| 502 Bad Gateway   | Container not started | `docker compose ps` to check status |
| Pipeline stuck    | Gitea runner offline  | Restart runner on QNAP              |
@@ -1,62 +0,0 @@
---
auto_execution_mode: 0
description: Review code changes for bugs, security issues, and improvements
---

You are a senior software engineer performing a thorough code review to identify potential bugs.

Your task is to find all potential bugs and code improvements in the code changes. Focus on:

1. Logic errors and incorrect behavior
2. Edge cases that aren't handled
3. Null/undefined reference issues
4. Race conditions or concurrency issues
5. Security vulnerabilities
6. Improper resource management or resource leaks
7. API contract violations
8. Incorrect caching behavior, including cache staleness issues, cache key-related bugs, incorrect cache invalidation, and ineffective caching
9. Violations of existing code patterns or conventions

## 🔴 Tier 1 Critical Rules (CI Blockers)

The following are **CI-blocking issues** that must be caught in code review. These align with project specs in `specs/05-Engineering-Guidelines/` and `specs/06-Decision-Records/`:

### ADR-019: UUID Handling

- **❌ NEVER use `parseInt()`, `Number()`, or `+` operator on UUID values**
  - Example of violation: `parseInt(projectId)` where `projectId` is UUID string
  - ✅ Correct: Use UUID string directly without conversion
- **❌ NEVER expose internal INT PK in API responses**
  - API must expose only `publicId` (transformed to `id` via `@Expose()`)
  - Verify DTOs have `@Exclude()` on `id: number` field
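ADR-019's rule can be illustrated with a small, framework-free sketch (the regex guard is illustrative; in the project itself this would presumably be `class-validator`'s `@IsUUID()`):

```typescript
// UUIDs are opaque strings: validate the shape, never coerce to a number.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertUuid(value: string): string {
  if (!UUID_RE.test(value)) {
    throw new Error(`Invalid UUID: ${value}`);
  }
  return value; // pass the string through unchanged
}

// ❌ Violation: parseInt("3f2c8a1e-...") stops at the first non-digit and
// returns a meaningless number (here, 3) — exactly what ADR-019 bans.
```

The reviewer's job is to flag any numeric coercion of an identifier like this, even when it "works" for legacy integer IDs.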
### TypeScript Strict Rules

- **❌ ZERO `any` types allowed** — use proper types or `unknown` + narrowing
- **❌ ZERO `console.log`** — must use NestJS `Logger` (backend) or remove (frontend)
- **❌ NO `req: any` in controllers** — use `RequestWithUser` typed interface
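A minimal sketch of the `RequestWithUser` pattern referenced above (the `user` payload fields are assumptions, not the project's actual JWT claims):

```typescript
// Typed request: controllers narrow `req` instead of reaching for `any`.
interface AuthUser {
  id: string;      // publicId (UUID string), per ADR-019
  roles: string[];
}

interface RequestWithUser {
  user: AuthUser;
  // ...plus the usual Express request fields in the real interface
}

function currentUserId(req: RequestWithUser): string {
  return req.user.id; // type-safe; no `(req as any).user`
}
```

In review, any `req: any` or `(req as any).user` in a controller is a flag; the typed interface makes the user payload's shape checkable.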
### Database & Architecture

- **❌ NO SQL Triggers for business logic** — use NestJS Service methods instead
- **❌ NO `.env` files in production** — use Docker environment variables
- **❌ NO direct table/column name invention** — verify against `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`

### Security (ADR-016)

- Idempotency validation for critical `POST`/`PUT`/`PATCH` endpoints
- Two-phase file upload pattern (Upload → Temp → Commit → Permanent)
- Input validation with class-validator (backend) and Zod (frontend)

### Test Coverage Requirements

- **Backend Services:** 80% minimum
- **Backend Overall:** 70% minimum
- **Business Logic:** 80% minimum
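These thresholds could be enforced mechanically via Jest's `coverageThreshold` option rather than by review alone; a sketch (the glob paths are assumptions about this repo's layout):

```javascript
// jest.config.js (sketch) — fail the test run when coverage drops
// below the minimums listed above.
const config = {
  collectCoverage: true,
  coverageThreshold: {
    // Backend overall: 70% minimum
    global: { lines: 70 },
    // Services (business logic): 80% minimum
    "./src/modules/**/*.service.ts": { lines: 80 },
  },
};

module.exports = config;
```

With this in place, a PR that lowers coverage below the floor fails CI without a reviewer having to notice.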
Make sure to:

1. If exploring the codebase, call multiple tools in parallel for increased efficiency. Do not spend too much time exploring.
2. If you find pre-existing bugs in the code, report those as well, since it's important to maintain general code quality for the user.
3. Do NOT report issues that are speculative or low-confidence. All your conclusions should be based on a complete understanding of the codebase.
4. Remember that if you were given a specific git commit, it may not be checked out, and the local code state may differ.
@@ -1,108 +0,0 @@
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---

# Schema Change Workflow

Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md` — **NO TypeORM migrations**.

## Pre-Change Checklist

- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)

## Steps

1. **Read current schema** — load the full schema file:

   ```
   specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
   ```

2. **Read data dictionary** — understand current field definitions:

   ```
   specs/03-Data-and-Storage/03-01-data-dictionary.md
   ```

// turbo
3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:

   - Tables being modified/created
   - Columns being added/renamed/dropped
   - Foreign key relationships affected
   - Indexes being added/modified
   - Seed data impact (if any)

4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
   - Add/modify table definitions
   - Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
   - Add inline comments for new columns explaining purpose
   - Ensure `DEFAULT` values and `NOT NULL` constraints are correct
   - Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed

> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.

5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
   - Add new tables/columns with descriptions
   - Update data types and constraints
   - Document business rules for new fields
   - Add enum value definitions if applicable

6. **Update seed data** (if applicable):
   - `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
   - `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions

7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
   - Map ONLY columns defined in schema SQL
   - Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
   - Add `@VersionColumn()` if optimistic locking is needed
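The `@VersionColumn()` semantics referenced in steps 4 and 7 can be sketched without TypeORM: an update only applies when the caller's expected version matches the stored one (the row shape here is illustrative):

```typescript
// Optimistic locking in miniature: writers carry the version they read;
// a stale version means someone else updated the row first.
interface Row {
  value: string;
  version: number;
}

function optimisticUpdate(
  row: Row,
  expectedVersion: number,
  newValue: string,
): Row {
  if (row.version !== expectedVersion) {
    // TypeORM raises a comparable version-mismatch error in this case.
    throw new Error("OptimisticLockVersionMismatch");
  }
  return { value: newValue, version: row.version + 1 };
}
```

This is why the schema's `version` column and the entity's `@VersionColumn()` must stay in sync: the database holds the counter, the ORM enforces the compare-and-bump.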
8. **Update DTOs** — if new columns are exposed via API:
   - Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
   - Add `class-validator` decorators for all new fields
   - Never use `any` type

// turbo
9. **Run type check** — verify no TypeScript errors:

   ```bash
   cd backend && npx tsc --noEmit
   ```

10. **Generate SQL diff** — create a summary of changes for the user to apply manually:

    ```
    -- Schema Change Summary
    -- Date: <current date>
    -- Feature: <feature name>
    -- Tables affected: <list>
    --
    -- ⚠️ Apply this SQL to the live database manually:

    ALTER TABLE ...;
    -- or
    CREATE TABLE ...;
    ```

11. **Notify user** — present the SQL diff and remind them to:
    - Apply the SQL change to the live database manually
    - Verify the change doesn't break existing data
    - Run `pnpm test` after applying to confirm entity mappings work

## Common Patterns

| Change Type | Template                                                       |
| ----------- | -------------------------------------------------------------- |
| Add column  | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table   | Full `CREATE TABLE` with constraints and indexes               |
| Add index   | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);`       |
| Add FK      | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...`     |
| Add enum    | Add to data dictionary + `ENUM('val1','val2')` in column def   |

## On Error

- If schema SQL has syntax errors → fix the SQL, then re-run step 9 (`tsc --noEmit`) to confirm entity mappings still compile
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
@@ -1,27 +0,0 @@
---
description: Execute the full preparation pipeline (Specify -> Clarify -> Plan -> Tasks -> Analyze) in sequence.
---

# Workflow: speckit.prepare

This workflow orchestrates the sequential execution of the Speckit preparation phase skills (02-06).

1. **Step 1: Specify (Skill 02)**
   - Goal: Create or update the `spec.md` based on user input.
   - Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.

2. **Step 2: Clarify (Skill 03)**
   - Goal: Refine the `spec.md` by identifying and resolving ambiguities.
   - Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.

3. **Step 3: Plan (Skill 04)**
   - Goal: Generate `plan.md` from the finalized spec.
   - Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.

4. **Step 4: Tasks (Skill 05)**
   - Goal: Generate actionable `tasks.md` from the plan.
   - Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.

5. **Step 5: Analyze (Skill 06)**
   - Goal: Validate consistency across all design artifacts (spec, plan, tasks).
   - Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.
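The fail-fast ordering of a pipeline like this can be sketched as a small runner (step names only; executing each SKILL.md is stubbed out as a boolean-returning function):

```typescript
// Sequential pipeline: each step runs only if the previous one succeeded,
// mirroring the Specify → Clarify → Plan → Tasks → Analyze ordering.
type Step = { name: string; run: () => boolean };

function runPipeline(steps: Step[]): { completed: string[]; failed?: string } {
  const completed: string[] = [];
  for (const step of steps) {
    if (!step.run()) {
      // Stop immediately and report which step failed.
      return { completed, failed: step.name };
    }
    completed.push(step.name);
  }
  return { completed };
}
```

The `failed` field corresponds to the "which step failed" report the workflows require on error.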
@@ -1,18 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---

# Workflow: speckit.checklist

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
@@ -1,19 +0,0 @@
---
description: Compare two versions of a spec or plan to highlight changes.
---

# Workflow: speckit.diff

1. **Context Analysis**:
   - The user has provided an input prompt (optional file paths or version references).

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no files to compare: Use current feature's `spec.md` vs git HEAD
   - If `spec.md` doesn't exist: Run `/speckit.specify` first
@@ -1,19 +0,0 @@
---
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
---

# Workflow: speckit.migrate

1. **Context Analysis**:
   - The user has provided an input prompt (path to analyze, feature name).

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If path doesn't exist: Ask user to provide a valid directory path
   - If no code found: Report that no analyzable code was detected
@@ -1,20 +0,0 @@
---
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
---

// turbo-all

# Workflow: speckit.quizme

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If required files don't exist, inform the user which prerequisite workflow to run first (e.g., `/speckit.specify` to create `spec.md`).
@@ -1,20 +0,0 @@
---
description: Display a dashboard showing feature status, completion percentage, and blockers.
---

// turbo-all

# Workflow: speckit.status

1. **Context Analysis**:
   - The user may optionally specify a feature to focus on.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no features exist: Report "No features found. Run `/speckit.specify` to create your first feature."
@@ -1,18 +0,0 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
---

# Workflow: speckit.taskstoissues

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `tasks.md` is missing: Run `/speckit.tasks` first
+180
-28
@@ -2,7 +2,7 @@
 > **The Event Horizon of Software Quality.**
 > _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
-> _Version: 1.2.0 — LCBP3-DMS Edition (v1.8.1 UAT Ready)_
+> _Version: 1.8.6 — LCBP3-DMS Edition (v1.8.6 Production Ready)_

 ---

@@ -55,7 +55,7 @@ Some skills and scripts reference a `.specify/` directory for templates and proj
 The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.

 ```text
-.agents/
+.agents/                         # Agent Skills & Rules
 ├── skills/                      # @ Mentions (Agent Intelligence)
 │   ├── nestjs-best-practices/   # NestJS Architecture Patterns
 │   ├── next-best-practices/     # Next.js App Router Patterns
@@ -78,32 +78,37 @@ The toolkit is organized into modular components that provide both the logic (Sc
 │   ├── speckit-tester/          # Test Runner & Coverage
 │   └── speckit-validate/        # Implementation Validator
 │
-├── workflows/                   # / Slash Commands (Orchestration)
-│   ├── 00-speckit-all.md        # Full Pipeline (10 steps: Specify → Validate)
-│   ├── 01–11-speckit-*.md       # Individual phase workflows
-│   ├── speckit-prepare.md       # Prep Pipeline (5 steps: Specify → Analyze)
-│   ├── schema-change.md         # DB Schema Change (ADR-009)
-│   ├── create-backend-module.md # NestJS Module Scaffolding
-│   ├── create-frontend-page.md  # Next.js Page Scaffolding
-│   ├── deploy.md                # Deployment via Gitea CI/CD
-│   └── util-speckit-*.md        # Utilities (checklist, diff, migrate, etc.)
+├── rules/                       # Project Context & Validation Rules
+│   ├── 00-project-context.md    # Role, Persona, Rule Tiers
+│   ├── 01-adr-019-uuid.md       # UUID Strategy (Critical)
+│   ├── 02-security.md           # Security Requirements
+│   ├── 03-typescript.md         # TypeScript Standards
+│   ├── 04-domain-terminology.md # DMS Glossary Compliance
+│   ├── 05-forbidden-actions.md  # Critical Prohibited Patterns
+│   ├── 06-backend-patterns.md   # NestJS Architecture Rules
+│   ├── 07-frontend-patterns.md  # Next.js App Router Rules
+│   ├── 08-development-flow.md   # Development Workflow
+│   ├── 09-commit-checklist.md   # Pre-commit Validation
+│   ├── 10-error-handling.md     # ADR-007 Compliance
+│   └── 11-ai-integration.md     # ADR-018/020 AI Boundaries
 │
 └── scripts/
     ├── bash/                    # Bash Core (Kinetic logic)
-    │   ├── common.sh                # Shared utilities & path resolution
-    │   ├── check-prerequisites.sh   # Prerequisite validation
-    │   ├── create-new-feature.sh    # Feature branch creation
-    │   ├── setup-plan.sh            # Plan template setup
-    │   ├── update-agent-context.sh  # Agent file updater (main)
-    │   ├── plan-parser.sh           # Plan data extraction (module)
-    │   ├── content-generator.sh     # Language-specific templates (module)
-    │   └── agent-registry.sh        # 17-agent type registry (module)
     ├── powershell/              # PowerShell Equivalents (Windows-native)
-    │   ├── common.ps1               # Shared utilities & prerequisites
-    │   └── create-new-feature.ps1   # Feature branch creation
     ├── fix_links.py             # Spec link fixer
     ├── verify_links.py          # Spec link verifier
     └── start-mcp.js             # MCP server launcher

+.windsurf/workflows/             # / Slash Commands (Orchestration)
+├── 00-speckit.all.md            # Full Pipeline (10 steps: Specify → Validate)
+├── 01–11-speckit-*.md           # Individual phase workflows
+├── speckit-prepare.md           # Prep Pipeline (5 steps: Specify → Analyze)
+├── schema-change.md             # DB Schema Change (ADR-009)
+├── create-backend-module.md     # NestJS Module Scaffolding
+├── create-frontend-page.md      # Next.js Page Scaffolding
+├── deploy.md                    # Deployment via Gitea CI/CD
+├── review.md                    # Code Review Workflow
+└── util-speckit-*.md            # Utilities (checklist, diff, migrate, etc.)
 ```

 ---

@@ -254,19 +259,19 @@ If you change your mind mid-project:

 ---

-## 🏗️ LCBP3-DMS Project Notes (v1.8.1)
+## 🏗️ LCBP3-DMS Project Notes (v1.8.6)

-### 📊 Current Status: UAT Ready (2026-03-11)
+### 📊 Current Status: Production Ready (2026-04-14)

 | Area          | Status                          |
-| ------------- | ------------------------------------- |
+| ------------- | ------------------------------- |
 | Backend       | ✅ 18 Modules, Production Ready |
 | Frontend      | ✅ 100% Complete                |
-| Database      | ✅ Schema v1.8.0 Stable         |
+| Database      | ✅ Schema v1.8.6 Stable         |
 | Documentation | ✅ **10/10 Gaps Closed**        |
-| AI Migration  | 🔄 Pre-migration Setup (n8n + Ollama) |
+| AI Migration  | ✅ Ollama Integration Complete  |
-| UAT           | 🔄 In Progress                  |
+| UAT           | ✅ Completed Successfully       |
-| Deployment    | 📋 Pending Go-Live              |
+| Deployment    | ✅ Production Deployed          |

 ### 📁 Key Spec Files (Always Check Before Writing Code)

@@ -300,4 +305,151 @@

---

## 🔧 Troubleshooting

### Common Issues & Solutions

#### **Version Inconsistency Errors**

**Problem**: Scripts report version mismatches between files.

**Solution**:

```bash
# Run version validation
./scripts/bash/validate-versions.sh

# Fix by updating all files to v1.8.6
# Then re-run validation to confirm
```

**Files to check**:

- `.agents/README.md`
- `.agents/skills/VERSION`
- `.agents/rules/00-project-context.md`
- `.agents/skills/skills.md`

#### **Missing Workflow Files**

**Problem**: Workflows not found in `.windsurf/workflows/`.

**Solution**:

```bash
# Sync workflow check
./scripts/bash/sync-workflows.sh

# Verify all 23 expected workflows are present
# Create missing ones from templates if needed
```

#### **Skill Health Issues**

**Problem**: Skills missing SKILL.md or required sections.

**Solution**:

```bash
# Run comprehensive skill audit
./scripts/bash/audit-skills.sh

# Check specific skill issues
# Missing files will be listed with specific errors
```

**Required SKILL.md sections**:

- Front matter: `name`, `description`, `version`
- Content: `## Role`, `## Task`
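Taken together, a minimal SKILL.md stub that satisfies both the front-matter and section checks might look like this (all names illustrative, not an actual repository skill):

```markdown
---
name: speckit-example
description: Illustrative stub that passes the skill audit.
version: 1.0.0
---

## Role

Senior reviewer persona for the example skill.

## Task

1. Read the input provided by the invoking workflow.
2. Produce the skill's output following the steps above.
```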

#### **Script Permission Issues**

**Problem**: Bash scripts not executable.

**Solution**:

```bash
# Make scripts executable
chmod +x .agents/scripts/bash/*.sh

# Verify with
ls -la .agents/scripts/bash/
```

#### **PowerShell Execution Policy**

**Problem**: PowerShell scripts blocked by execution policy.

**Solution**:

```powershell
# Check current policy
Get-ExecutionPolicy

# Allow scripts for current user
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Or bypass the policy for a single script
PowerShell -ExecutionPolicy Bypass -File .agents/scripts/powershell/audit-skills.ps1
```

### Debug Mode

**Enable verbose output**:

```bash
# Run scripts with debug info
bash -x .agents/scripts/bash/audit-skills.sh
```

```powershell
# PowerShell with verbose output
$VerbosePreference = "Continue"
. .agents/scripts/powershell/audit-skills.ps1
```

### Health Check Commands

**Quick health assessment**:

```bash
# 1. Check versions
./scripts/bash/validate-versions.sh

# 2. Audit skills
./scripts/bash/audit-skills.sh

# 3. Sync workflows
./scripts/bash/sync-workflows.sh

# 4. Check directory structure
find .agents -type f -name "*.md" | wc -l
find .windsurf/workflows -name "*.md" | wc -l
```

**PowerShell equivalent**:

```powershell
# 1. Check versions
. .agents/scripts/powershell/validate-versions.ps1

# 2. Audit skills
. .agents/scripts/powershell/audit-skills.ps1

# 3. Count files
(Get-ChildItem -Path .agents -Recurse -Filter "*.md").Count
(Get-ChildItem -Path .windsurf/workflows -Filter "*.md").Count
```

### Getting Help

**If issues persist**:

1. Check LCBP3 project version alignment
2. Verify `.specify/` directory structure (if using templates)
3. Ensure all dependencies are installed (bash, PowerShell Core)
4. Review the specific error messages in script output
5. Check this README for workflow path updates (`.windsurf/workflows`)

---

_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._
@@ -0,0 +1,571 @@
#!/usr/bin/env node

/**
 * advanced-validator.js - Advanced validation capabilities for .agents
 * Part of LCBP3-DMS Phase 3 enhancements
 */

const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');

// Advanced validation class
class AdvancedValidator {
  constructor() {
    this.validationResults = {
      timestamp: new Date().toISOString(),
      validations: {},
      summary: {
        total_validations: 0,
        passed_validations: 0,
        failed_validations: 0,
        warnings: 0,
        critical_issues: 0
      }
    };
    this.criticalIssues = [];
  }

  log(message, level = 'info') {
    const colors = {
      info: '\x1b[36m',     // Cyan
      pass: '\x1b[32m',     // Green
      fail: '\x1b[31m',     // Red
      warn: '\x1b[33m',     // Yellow
      critical: '\x1b[35m', // Magenta
      reset: '\x1b[0m'
    };

    const color = colors[level] || colors.info;
    console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
  }

  validateSkillFrontMatter(skillPath, skillName) {
    const skillMdPath = path.join(skillPath, 'SKILL.md');

    if (!fs.existsSync(skillMdPath)) {
      this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
        message: 'SKILL.md file not found',
        path: skillMdPath
      });
      return false;
    }

    try {
      const content = fs.readFileSync(skillMdPath, 'utf8');
      const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);

      if (!frontMatterMatch) {
        this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
          message: 'No front matter found',
          path: skillMdPath
        });
        return false;
      }

      try {
        const frontMatter = yaml.load(frontMatterMatch[1]);
        const requiredFields = ['name', 'description', 'version'];
        const missingFields = requiredFields.filter(field => !frontMatter[field]);

        if (missingFields.length > 0) {
          this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
            message: `Missing required fields: ${missingFields.join(', ')}`,
            missing_fields: missingFields,
            front_matter: frontMatter,
            path: skillMdPath
          });
          return false;
        }

        // Validate version format
        const versionPattern = /^\d+\.\d+\.\d+$/;
        if (!versionPattern.test(frontMatter.version)) {
          this.addValidationResult(`skill_${skillName}_version_format`, 'warn', {
            message: 'Version format should be X.Y.Z',
            version: frontMatter.version,
            path: skillMdPath
          });
        }

        // Validate dependencies if present
        if (frontMatter['depends-on']) {
          const dependencies = Array.isArray(frontMatter['depends-on'])
            ? frontMatter['depends-on']
            : [frontMatter['depends-on']];

          for (const dep of dependencies) {
            const depPath = path.join(SKILLS_DIR, dep);
            if (!fs.existsSync(depPath)) {
              this.addValidationResult(`skill_${skillName}_dependency_${dep}`, 'critical', {
                message: `Dependency not found: ${dep}`,
                dependency: dep,
                path: skillMdPath
              });
            }
          }
        }

        this.addValidationResult(`skill_${skillName}_frontmatter`, 'pass', {
          message: 'Front matter is valid',
          front_matter: frontMatter,
          path: skillMdPath
        });
        return true;

      } catch (yamlError) {
        this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
          message: `Invalid YAML in front matter: ${yamlError.message}`,
          path: skillMdPath
        });
        return false;
      }

    } catch (error) {
      this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
        message: `Error reading SKILL.md: ${error.message}`,
        path: skillMdPath
      });
      return false;
    }
  }

  validateSkillContent(skillPath, skillName) {
    const skillMdPath = path.join(skillPath, 'SKILL.md');

    if (!fs.existsSync(skillMdPath)) {
      return false;
    }

    try {
      const content = fs.readFileSync(skillMdPath, 'utf8');

      // Check for required sections
      const requiredSections = ['## Role', '## Task'];
      const missingSections = requiredSections.filter(section => !content.includes(section));

      if (missingSections.length > 0) {
        this.addValidationResult(`skill_${skillName}_content`, 'fail', {
          message: `Missing required sections: ${missingSections.join(', ')}`,
          missing_sections: missingSections,
          path: skillMdPath
        });
        return false;
      }

      // Check for forbidden patterns
      const forbiddenPatterns = [
        { pattern: /TODO.*FIX/gi, message: 'TODO items should be resolved' },
        { pattern: /FIXME/gi, message: 'FIXME items should be addressed' },
        { pattern: /XXX/gi, message: 'XXX markers should be replaced' }
      ];

      for (const { pattern, message } of forbiddenPatterns) {
        if (pattern.test(content)) {
          this.addValidationResult(`skill_${skillName}_forbidden_patterns`, 'warn', {
            message: `${message} found in content`,
            pattern: pattern.toString(),
            path: skillMdPath
          });
        }
      }

      // Validate content length
      const contentLength = content.length;
      if (contentLength < 500) {
        this.addValidationResult(`skill_${skillName}_content_length`, 'warn', {
          message: 'Skill content seems too short',
          length: contentLength,
          path: skillMdPath
        });
      }

      this.addValidationResult(`skill_${skillName}_content`, 'pass', {
        message: 'Skill content is valid',
        length: contentLength,
        path: skillMdPath
      });
      return true;

    } catch (error) {
      this.addValidationResult(`skill_${skillName}_content`, 'fail', {
        message: `Error validating content: ${error.message}`,
        path: skillMdPath
      });
      return false;
    }
  }

  validateWorkflowStructure(workflowPath, workflowName) {
    if (!fs.existsSync(workflowPath)) {
      this.addValidationResult(`workflow_${workflowName}_exists`, 'fail', {
        message: 'Workflow file not found',
        path: workflowPath
      });
      return false;
    }

    try {
      const content = fs.readFileSync(workflowPath, 'utf8');

      // Check for markdown headers
      if (!content.includes('#')) {
        this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
          message: 'No markdown headers found',
          path: workflowPath
        });
        return false;
      }

      // Check for workflow-specific patterns
      const hasWorkflowContent = content.length > 200;
      if (!hasWorkflowContent) {
        this.addValidationResult(`workflow_${workflowName}_content`, 'warn', {
          message: 'Workflow content seems too short',
          length: content.length,
          path: workflowPath
        });
      }

      // Validate skill references
      const skillReferences = content.match(/@speckit-\w+/g) || [];
      for (const skillRef of skillReferences) {
        const skillName = skillRef.replace('@', '');
        const skillPath = path.join(SKILLS_DIR, skillName);

        if (!fs.existsSync(skillPath)) {
          this.addValidationResult(`workflow_${workflowName}_skill_ref_${skillName}`, 'critical', {
            message: `Workflow references non-existent skill: ${skillRef}`,
            skill_reference: skillRef,
            path: workflowPath
          });
        }
      }

      this.addValidationResult(`workflow_${workflowName}_structure`, 'pass', {
        message: 'Workflow structure is valid',
        skill_references: skillReferences,
        path: workflowPath
      });
      return true;

    } catch (error) {
      this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
        message: `Error validating workflow: ${error.message}`,
        path: workflowPath
      });
      return false;
    }
  }

  validateCrossReferences() {
    this.log('Validating cross-references...', 'info');

    // Check README.md references
    const readmePath = path.join(AGENTS_DIR, 'README.md');
    if (fs.existsSync(readmePath)) {
      const readmeContent = fs.readFileSync(readmePath, 'utf8');

      // Check if README references correct workflow path
      if (readmeContent.includes('.agents/workflows') && !readmeContent.includes('.windsurf/workflows')) {
        this.addValidationResult('readme_workflow_reference', 'critical', {
          message: 'README.md references .agents/workflows instead of .windsurf/workflows',
          path: readmePath
        });
      }

      // Check version consistency in README
      const versionMatches = readmeContent.match(/v?(\d+\.\d+\.\d+)/g) || [];
      const uniqueVersions = [...new Set(versionMatches)];

      if (uniqueVersions.length > 1) {
        this.addValidationResult('readme_version_consistency', 'warn', {
          message: 'Multiple versions found in README.md',
          versions: uniqueVersions,
          path: readmePath
        });
      }
    }

    // Check skills.md references
    const skillsMdPath = path.join(SKILLS_DIR, 'skills.md');
    if (fs.existsSync(skillsMdPath)) {
      const skillsContent = fs.readFileSync(skillsMdPath, 'utf8');

      // Validate skill dependency matrix
      if (skillsContent.includes('## Skill Dependency Matrix')) {
        this.addValidationResult('skills_dependency_matrix', 'pass', {
          message: 'Skills documentation includes dependency matrix',
          path: skillsMdPath
        });
      } else {
        this.addValidationResult('skills_dependency_matrix', 'warn', {
          message: 'Skills documentation missing dependency matrix',
          path: skillsMdPath
        });
      }
    }
  }

  validateSecurityCompliance() {
    this.log('Validating security compliance...', 'info');

    // Check for security patterns in rules
    const securityRulePath = path.join(AGENTS_DIR, 'rules', '02-security.md');
    if (fs.existsSync(securityRulePath)) {
      const securityContent = fs.readFileSync(securityRulePath, 'utf8');

      const requiredSecurityTopics = [
        'authentication',
        'authorization',
        'rbac',
        'validation',
        'audit'
      ];

      const missingTopics = requiredSecurityTopics.filter(topic =>
        !securityContent.toLowerCase().includes(topic.toLowerCase())
      );

      if (missingTopics.length > 0) {
        this.addValidationResult('security_rules_completeness', 'warn', {
          message: `Security rules missing topics: ${missingTopics.join(', ')}`,
          missing_topics: missingTopics,
          path: securityRulePath
        });
      } else {
        this.addValidationResult('security_rules_completeness', 'pass', {
          message: 'Security rules cover all required topics',
          path: securityRulePath
        });
      }
    }

    // Check for ADR-019 compliance in rules
    const uuidRulePath = path.join(AGENTS_DIR, 'rules', '01-adr-019-uuid.md');
    if (fs.existsSync(uuidRulePath)) {
      const uuidContent = fs.readFileSync(uuidRulePath, 'utf8');

      const criticalUuidRules = [
        'parseInt',
        'Number(',
        'publicId',
        '@Exclude()'
      ];

      const missingRules = criticalUuidRules.filter(rule =>
        !uuidContent.includes(rule)
      );

      if (missingRules.length > 0) {
        this.addValidationResult('uuid_rules_completeness', 'critical', {
          message: `UUID rules missing critical patterns: ${missingRules.join(', ')}`,
          missing_patterns: missingRules,
          path: uuidRulePath
        });
      } else {
        this.addValidationResult('uuid_rules_completeness', 'pass', {
          message: 'UUID rules cover all critical patterns',
          path: uuidRulePath
        });
      }
    }
  }

  validatePerformanceMetrics() {
    this.log('Validating performance metrics...', 'info');

    // Check file sizes
    const criticalFiles = [
      { path: path.join(AGENTS_DIR, 'README.md'), name: 'README.md' },
      { path: path.join(SKILLS_DIR, 'skills.md'), name: 'skills.md' },
      { path: path.join(AGENTS_DIR, 'skills', 'VERSION'), name: 'VERSION' }
    ];

    for (const file of criticalFiles) {
      if (fs.existsSync(file.path)) {
        const stats = fs.statSync(file.path);
        const sizeKB = stats.size / 1024;

        if (sizeKB > 100) {
          this.addValidationResult(`file_size_${file.name}`, 'warn', {
            message: `File ${file.name} is large (${sizeKB.toFixed(1)}KB)`,
            size_kb: sizeKB,
            path: file.path
          });
        } else {
          this.addValidationResult(`file_size_${file.name}`, 'pass', {
            message: `File ${file.name} size is acceptable`,
            size_kb: sizeKB,
            path: file.path
          });
        }
      }
    }

    // Check directory structure depth
    function getDirectoryDepth(dirPath, currentDepth = 0) {
      let maxDepth = currentDepth;

      if (fs.existsSync(dirPath)) {
        const items = fs.readdirSync(dirPath);
        for (const item of items) {
          const itemPath = path.join(dirPath, item);
          if (fs.statSync(itemPath).isDirectory()) {
            const depth = getDirectoryDepth(itemPath, currentDepth + 1);
            maxDepth = Math.max(maxDepth, depth);
          }
        }
      }

      return maxDepth;
    }

    const agentsDepth = getDirectoryDepth(AGENTS_DIR);
    if (agentsDepth > 5) {
      this.addValidationResult('directory_depth', 'warn', {
        message: `.agents directory structure is deep (${agentsDepth} levels)`,
        depth: agentsDepth,
        path: AGENTS_DIR
      });
    } else {
      this.addValidationResult('directory_depth', 'pass', {
        message: '.agents directory structure depth is acceptable',
        depth: agentsDepth,
        path: AGENTS_DIR
      });
    }
  }

  addValidationResult(name, status, details) {
    this.validationResults.validations[name] = {
      status,
      timestamp: new Date().toISOString(),
      ...details
    };

    this.validationResults.summary.total_validations++;

    switch (status) {
      case 'pass':
        this.validationResults.summary.passed_validations++;
        this.log(`${name}: PASS - ${details.message}`, 'pass');
        break;
      case 'fail':
        this.validationResults.summary.failed_validations++;
        this.log(`${name}: FAIL - ${details.message}`, 'fail');
        break;
      case 'warn':
        this.validationResults.summary.warnings++;
        this.log(`${name}: WARN - ${details.message}`, 'warn');
        break;
      case 'critical':
        this.validationResults.summary.critical_issues++;
        this.criticalIssues.push({ name, ...details });
        this.log(`${name}: CRITICAL - ${details.message}`, 'critical');
        break;
    }
  }

  async runAdvancedValidation() {
    this.log('Starting advanced validation...', 'info');
    this.log(`Base directory: ${BASE_DIR}`, 'info');

    // Validate all skills
    this.log('Validating skills...', 'info');
    if (fs.existsSync(SKILLS_DIR)) {
      const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
        const itemPath = path.join(SKILLS_DIR, item);
        return fs.statSync(itemPath).isDirectory();
      });

      for (const skillDir of skillDirs) {
        const skillPath = path.join(SKILLS_DIR, skillDir);
        this.validateSkillFrontMatter(skillPath, skillDir);
        this.validateSkillContent(skillPath, skillDir);
      }
    }

    // Validate all workflows
    this.log('Validating workflows...', 'info');
    if (fs.existsSync(WORKFLOWS_DIR)) {
      const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));

      for (const workflowFile of workflowFiles) {
        const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
        const workflowName = workflowFile.replace('.md', '');
        this.validateWorkflowStructure(workflowPath, workflowName);
      }
    }

    // Cross-reference validation
    this.validateCrossReferences();

    // Security compliance validation
    this.validateSecurityCompliance();

    // Performance metrics validation
    this.validatePerformanceMetrics();

    // Generate summary
    this.generateSummary();

    return this.validationResults;
  }

  generateSummary() {
    const { summary } = this.validationResults;

    this.log('=== Advanced Validation Summary ===', 'info');
    this.log(`Total validations: ${summary.total_validations}`, 'info');
    this.log(`Passed: ${summary.passed_validations}`, 'pass');
    this.log(`Failed: ${summary.failed_validations}`, summary.failed_validations > 0 ? 'fail' : 'info');
    this.log(`Warnings: ${summary.warnings}`, 'warn');
    this.log(`Critical issues: ${summary.critical_issues}`, 'critical');

    // Critical issue details are tracked on the instance, not inside validationResults
    if (this.criticalIssues.length > 0) {
      this.log('Critical Issues:', 'critical');
      this.criticalIssues.forEach(issue => {
        this.log(`  - ${issue.name}: ${issue.message}`, 'critical');
      });
    }

    // Save validation results
    const validationReportPath = path.join(AGENTS_DIR, 'reports', 'advanced-validation.json');
    const reportsDir = path.dirname(validationReportPath);

    if (!fs.existsSync(reportsDir)) {
      fs.mkdirSync(reportsDir, { recursive: true });
    }

    fs.writeFileSync(validationReportPath, JSON.stringify(this.validationResults, null, 2));
    this.log(`Advanced validation report saved to: ${validationReportPath}`, 'info');
  }
}

// CLI interface
async function main() {
  const validator = new AdvancedValidator();

  try {
    const results = await validator.runAdvancedValidation();
    process.exit(results.summary.critical_issues > 0 ? 1 : 0);
  } catch (error) {
    console.error('Advanced validation failed:', error);
    process.exit(1);
  }
}

// Export for use in other modules
module.exports = { AdvancedValidator };

// Run if called directly
if (require.main === module) {
  main();
}
@@ -0,0 +1,195 @@
#!/bin/bash

# audit-skills.sh - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements

set -uo pipefail
# Note: no -e; per-skill checks accumulate issues without terminating the audit

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
SKILLS_DIR="$AGENTS_DIR/skills"

echo "=== Skills Health Audit ==="
echo "Base directory: $BASE_DIR"
echo

# Function to check if a skill has the required files
check_skill_health() {
    local skill_dir="$1"
    local skill_name="$(basename "$skill_dir")"

    local issues=0

    # Check for SKILL.md
    if [[ -f "$skill_dir/SKILL.md" ]]; then
        echo -e "${GREEN} OK${NC}: $skill_name/SKILL.md"
    else
        echo -e "${RED} MISSING${NC}: $skill_name/SKILL.md"
        ((issues++)) || true
    fi

    # Check for templates directory (optional)
    if [[ -d "$skill_dir/templates" ]]; then
        template_count=$(find "$skill_dir/templates" -name "*.md" -type f | wc -l)
        if [[ $template_count -gt 0 ]]; then
            echo -e "${GREEN} OK${NC}: $skill_name/templates ($template_count files)"
        else
            echo -e "${YELLOW} EMPTY${NC}: $skill_name/templates (no files)"
        fi
    fi

    # Check SKILL.md content if it exists
    local skill_file="$skill_dir/SKILL.md"
    if [[ -f "$skill_file" ]]; then
        # Check for required front matter fields
        local required_fields=("name" "description" "version")
        for field in "${required_fields[@]}"; do
            if grep -q "^$field:" "$skill_file"; then
                echo -e "  ${GREEN} FIELD${NC}: $field"
            else
                echo -e "  ${RED} MISSING FIELD${NC}: $field"
                ((issues++)) || true
            fi
        done

        # Check for LCBP3 context reference (speckit-* skills only)
        if [[ "$skill_name" == speckit-* ]]; then
            if grep -q '_LCBP3-CONTEXT\.md' "$skill_file"; then
                echo -e "  ${GREEN} CONTEXT${NC}: LCBP3 appendix referenced"
            else
                echo -e "  ${YELLOW} MISSING${NC}: LCBP3 context reference"
                ((issues++)) || true
            fi
        fi
    fi

    return $issues
}

# Function to get the skill version from SKILL.md
get_skill_version() {
    local skill_file="$1"
    if [[ -f "$skill_file" ]]; then
        # Match 'version: X.Y.Z' (or quoted) at a LINE START only; ignore nested ' version:' fields.
        # Output: bare X.Y.Z with no quotes/whitespace.
        local raw
        raw=$(grep -E "^version:[[:space:]]*['\"]?[0-9]+\.[0-9]+\.[0-9]+" "$skill_file" | head -1 || true)
        if [[ -n "$raw" ]]; then
            printf '%s' "$raw" | sed -E "s/^version:[[:space:]]*['\"]?([0-9]+\.[0-9]+\.[0-9]+).*/\1/"
        else
            echo "unknown"
        fi
    else
        echo "no_file"
    fi
}

# Check skills directory
if [[ ! -d "$SKILLS_DIR" ]]; then
    echo -e "${RED}ERROR: Skills directory not found${NC}"
    exit 1
fi

echo "Scanning skills directory: $SKILLS_DIR"
echo

# Get all skill directories
SKILL_DIRS=()
while IFS= read -r -d '' dir; do
    SKILL_DIRS+=("$dir")
done < <(find "$SKILLS_DIR" -maxdepth 1 -type d -not -path "$SKILLS_DIR" -print0 | sort -z)

echo "Found ${#SKILL_DIRS[@]} skill directories"
echo

# Audit each skill
TOTAL_ISSUES=0
SKILL_SUMMARY=()

for skill_dir in "${SKILL_DIRS[@]}"; do
    skill_name="$(basename "$skill_dir")"
    # Skip non-skill entries (directories whose names start with '_')
    [[ "$skill_name" == _* ]] && continue
    echo "Auditing: $skill_name"
    echo "------------------------"

    # -e is off, so a nonzero return simply lands in $? for capture
    check_skill_health "$skill_dir"
    issues=$?

    skill_version=$(get_skill_version "$skill_dir/SKILL.md")
    SKILL_SUMMARY+=("$skill_name:$issues:$skill_version")

    TOTAL_ISSUES=$((TOTAL_ISSUES + issues))
    echo
done

# Summary report
echo "=== Skills Audit Summary ==="
echo

echo "Skill Status:"
echo "-----------"
for summary in "${SKILL_SUMMARY[@]}"; do
    IFS=':' read -r name issues version <<< "$summary"
    if [[ $issues -eq 0 ]]; then
        echo -e "${GREEN} HEALTHY${NC}: $name (v$version)"
    else
        echo -e "${RED} ISSUES${NC}: $name (v$version) - $issues issues"
    fi
done

echo

# Check skills.md version consistency
SKILLS_VERSION_FILE="$SKILLS_DIR/VERSION"
if [[ -f "$SKILLS_VERSION_FILE" ]]; then
    global_version=$(grep "^version:" "$SKILLS_VERSION_FILE" | sed 's/version: *//' | tr -d '\r\n ')
    echo "Global skills version: v$global_version"
    echo

    # Check for version mismatches
    echo "Version Consistency Check:"
    echo "------------------------"
    VERSION_MISMATCHES=0

    for summary in "${SKILL_SUMMARY[@]}"; do
        IFS=':' read -r name issues version <<< "$summary"
        if [[ "$version" != "unknown" && "$version" != "no_file" && "$version" != "$global_version" ]]; then
            echo -e "${YELLOW} MISMATCH${NC}: $name is v$version, global is v$global_version"
            ((VERSION_MISMATCHES++)) || true
        fi
    done

    if [[ $VERSION_MISMATCHES -eq 0 ]]; then
        echo -e "${GREEN} All skills match global version${NC}"
    fi
fi

echo

# Overall health
if [[ $TOTAL_ISSUES -eq 0 ]]; then
    echo -e "${GREEN}=== SUCCESS: All skills healthy ===${NC}"
    echo "Total skills: ${#SKILL_DIRS[@]}"
    exit 0
else
    echo -e "${RED}=== ISSUES FOUND: $TOTAL_ISSUES total issues ===${NC}"
    echo
    echo "Recommendations:"
    echo "1. Fix missing SKILL.md files"
    echo "2. Add required front matter fields"
    echo "3. Ensure Role and Task sections exist"
    echo "4. Align skill versions with global version"
    exit 1
fi
@@ -0,0 +1,149 @@
#!/bin/bash

# sync-workflows.sh - Sync workflow references between .agents and .windsurf
# Part of LCBP3-DMS Phase 2 improvements

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
WINDSURF_DIR="$BASE_DIR/.windsurf"
WORKFLOWS_DIR="$WINDSURF_DIR/workflows"

echo "=== Workflow Synchronization Check ==="
echo "Base directory: $BASE_DIR"
echo

# Function to check if a workflow exists
check_workflow() {
    local workflow_name="$1"
    local workflow_file="$WORKFLOWS_DIR/$workflow_name"

    if [[ -f "$workflow_file" ]]; then
        echo -e "${GREEN} EXISTS${NC}: $workflow_name"
        return 0
    else
        echo -e "${RED} MISSING${NC}: $workflow_name"
        return 1
    fi
}

# Function to list all workflows
list_workflows() {
    if [[ -d "$WORKFLOWS_DIR" ]]; then
        find "$WORKFLOWS_DIR" -name "*.md" -type f | sort
    else
        echo "No workflows directory found"
    fi
}

# Check directories
echo "Checking directory structure..."
if [[ -d "$AGENTS_DIR" ]]; then
    echo -e "${GREEN} OK${NC}: .agents directory exists"
else
    echo -e "${RED} ERROR${NC}: .agents directory not found"
    exit 1
fi

if [[ -d "$WINDSURF_DIR" ]]; then
    echo -e "${GREEN} OK${NC}: .windsurf directory exists"
else
    echo -e "${RED} ERROR${NC}: .windsurf directory not found"
    exit 1
fi

if [[ -d "$WORKFLOWS_DIR" ]]; then
    echo -e "${GREEN} OK${NC}: workflows directory exists"
else
    echo -e "${RED} ERROR${NC}: workflows directory not found"
    exit 1
fi

echo

# Expected workflows based on README documentation
echo "Checking expected workflows..."
EXPECTED_WORKFLOWS=(
    "00-speckit.all.md"
    "01-speckit.constitution.md"
    "02-speckit.specify.md"
    "03-speckit.clarify.md"
    "04-speckit.plan.md"
    "05-speckit.tasks.md"
    "06-speckit.analyze.md"
    "07-speckit.implement.md"
    "08-speckit.checker.md"
    "09-speckit.tester.md"
    "10-speckit.reviewer.md"
    "11-speckit.validate.md"
    "speckit.prepare.md"
    "schema-change.md"
    "create-backend-module.md"
    "create-frontend-page.md"
    "deploy.md"
    "review.md"
    "util-speckit.checklist.md"
    "util-speckit.diff.md"
    "util-speckit.migrate.md"
    "util-speckit.quizme.md"
    "util-speckit.status.md"
    "util-speckit.taskstoissues.md"
)

MISSING_WORKFLOWS=0

for workflow in "${EXPECTED_WORKFLOWS[@]}"; do
    if ! check_workflow "$workflow"; then
        # Plain arithmetic assignment: ((MISSING_WORKFLOWS++)) evaluates to 0
        # on the first increment and would kill the script under 'set -e'.
        MISSING_WORKFLOWS=$((MISSING_WORKFLOWS + 1))
    fi
done

echo

# List all actual workflows
echo "All workflows in $WORKFLOWS_DIR:"
echo "--------------------------------"
while IFS= read -r workflow; do
    echo "  $(basename "$workflow")"
done < <(list_workflows)

echo

# Check for orphaned workflows (unexpected ones)
echo "Checking for unexpected workflows..."
ACTUAL_WORKFLOWS=()
while IFS= read -r workflow; do
    ACTUAL_WORKFLOWS+=("$(basename "$workflow")")
done < <(list_workflows)

for actual_workflow in "${ACTUAL_WORKFLOWS[@]}"; do
    if [[ ! " ${EXPECTED_WORKFLOWS[*]} " =~ " ${actual_workflow} " ]]; then
        echo -e "${YELLOW} UNEXPECTED${NC}: $actual_workflow"
    fi
done

echo

# Summary
if [[ $MISSING_WORKFLOWS -eq 0 ]]; then
    echo -e "${GREEN}=== SUCCESS: All expected workflows present ===${NC}"
    echo "Total workflows: ${#ACTUAL_WORKFLOWS[@]}"
    exit 0
else
    echo -e "${RED}=== FAILED: $MISSING_WORKFLOWS workflows missing ===${NC}"
    echo
    echo "To fix missing workflows:"
    echo "1. Create missing workflow files in $WORKFLOWS_DIR"
    echo "2. Use existing workflows as templates"
    echo "3. Run this script again to verify"
    exit 1
fi
@@ -0,0 +1,106 @@
#!/bin/bash

# validate-versions.sh - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"

# Expected version (should match LCBP3 version)
EXPECTED_VERSION="1.8.9"

echo "=== .agents Version Validation ==="
echo "Base directory: $BASE_DIR"
echo "Expected version: $EXPECTED_VERSION"
echo

# Function to extract a version string from a file
extract_version() {
    local file="$1"
    local pattern="$2"

    if [[ -f "$file" ]]; then
        grep -o "$pattern" "$file" | head -1 | sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/' || echo "NOT_FOUND"
    else
        echo "FILE_NOT_FOUND"
    fi
}

# Files to check
declare -A FILES_TO_CHECK=(
    ["$AGENTS_DIR/skills/VERSION"]="version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
    ["$AGENTS_DIR/skills/skills.md"]="[Vv]\([0-9]\+\.[0-9]\+\.[0-9]\+\)"
)

# Track issues
ISSUES=0

echo "Checking version consistency..."
echo

for file in "${!FILES_TO_CHECK[@]}"; do
    pattern="${FILES_TO_CHECK[$file]}"
    relative_path="${file#"$BASE_DIR"/}"

    version=$(extract_version "$file" "$pattern")

    if [[ "$version" == "NOT_FOUND" ]] || [[ "$version" == "FILE_NOT_FOUND" ]]; then
        echo -e "${RED} ERROR${NC}: $relative_path - Version not found"
        # Not ((ISSUES++)): that returns 1 under 'set -e' when ISSUES is 0
        ISSUES=$((ISSUES + 1))
    elif [[ "$version" != "$EXPECTED_VERSION" ]]; then
        echo -e "${RED} ERROR${NC}: $relative_path - Found v$version, expected v$EXPECTED_VERSION"
        ISSUES=$((ISSUES + 1))
    else
        echo -e "${GREEN} OK${NC}: $relative_path - v$version"
    fi
done

echo

# Check for version mismatches in skill files
echo "Checking skill file versions..."
SKILL_VERSIONS_FILE="$AGENTS_DIR/skills/VERSION"
if [[ -f "$SKILL_VERSIONS_FILE" ]]; then
    skills_version=$(extract_version "$SKILL_VERSIONS_FILE" "version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)")
    echo "Skills version file: v$skills_version"
fi

# Check workflow versions (in .windsurf/workflows)
WORKFLOWS_DIR="$BASE_DIR/.windsurf/workflows"
if [[ -d "$WORKFLOWS_DIR" ]]; then
    echo "Checking workflow files..."
    workflow_count=0
    for workflow in "$WORKFLOWS_DIR"/*.md; do
        if [[ -f "$workflow" ]]; then
            workflow_count=$((workflow_count + 1))
        fi
    done
    echo -e "${GREEN} OK${NC}: Found $workflow_count workflow files"
else
    echo -e "${YELLOW} WARNING${NC}: Workflows directory not found at $WORKFLOWS_DIR"
fi

echo

# Summary
if [[ $ISSUES -eq 0 ]]; then
    echo -e "${GREEN}=== SUCCESS: All versions consistent ===${NC}"
    exit 0
else
    echo -e "${RED}=== FAILED: $ISSUES version issues found ===${NC}"
    echo
    echo "To fix version issues:"
    echo "1. Update files to use v$EXPECTED_VERSION"
    echo "2. Ensure LCBP3 project version matches"
    echo "3. Run this script again to verify"
    exit 1
fi
@@ -0,0 +1,516 @@
# ci-hooks.ps1 - Continuous integration hooks for .agents (PowerShell version)
# Part of LCBP3-DMS Phase 3 enhancements

param(
    [Parameter(Mandatory=$false)]
    [ValidateSet("pre-commit", "pre-push", "ci-pipeline", "install-hooks", "help")]
    [string]$Command = "help"
)

# Configuration
$BaseDir = Split-Path -Parent (Split-Path -Parent $PSScriptRoot)
$AgentsDir = Join-Path $BaseDir ".agents"
$CILogDir = Join-Path $AgentsDir "logs\ci"
$CIReportDir = Join-Path $AgentsDir "reports\ci"

# Ensure directories exist
if (-not (Test-Path $CILogDir)) { New-Item -ItemType Directory -Path $CILogDir -Force | Out-Null }
if (-not (Test-Path $CIReportDir)) { New-Item -ItemType Directory -Path $CIReportDir -Force | Out-Null }

# Colors for output (-ForegroundColor takes ConsoleColor names, not ANSI escapes)
$Colors = @{
    Red    = "Red"
    Green  = "Green"
    Yellow = "Yellow"
    Blue   = "Blue"
}

# Logging function
function Write-CILog {
    param(
        [string]$Level,
        [string]$Message
    )

    $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    $logFile = Join-Path $CILogDir "ci-$(Get-Date -Format 'yyyy-MM-dd').log"
    "$timestamp [$Level] $Message" | Out-File -FilePath $logFile -Append

    # Console output with colors
    switch ($Level) {
        "INFO" { Write-Host $Message -ForegroundColor $Colors.Blue }
        "PASS" { Write-Host $Message -ForegroundColor $Colors.Green }
        "WARN" { Write-Host $Message -ForegroundColor $Colors.Yellow }
        "FAIL" { Write-Host $Message -ForegroundColor $Colors.Red }
        default { Write-Host $Message }
    }
}

# Pre-commit hook
function Invoke-PreCommitHook {
    Write-CILog "INFO" "Running pre-commit validation..."

    $exitCode = 0

    # 1. Run version validation
    Write-CILog "INFO" "Checking version consistency..."
    $versionScript = Join-Path $AgentsDir "scripts\powershell\validate-versions.ps1"
    if (Test-Path $versionScript) {
        try {
            & $versionScript | Out-File -FilePath (Join-Path $CILogDir "pre-commit-versions.log") -Append
            Write-CILog "PASS" "Version validation passed"
        } catch {
            Write-CILog "FAIL" "Version validation failed"
            $exitCode = 1
        }
    } else {
        Write-CILog "WARN" "Version validation script not found"
    }

    # 2. Run skill audit
    Write-CILog "INFO" "Auditing skills..."
    $auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
    if (Test-Path $auditScript) {
        try {
            & $auditScript | Out-File -FilePath (Join-Path $CILogDir "pre-commit-skills.log") -Append
            Write-CILog "PASS" "Skill audit passed"
        } catch {
            Write-CILog "FAIL" "Skill audit failed"
            $exitCode = 1
        }
    } else {
        Write-CILog "WARN" "Skill audit script not found"
    }

    # 3. Run integration tests (if Node.js available)
    if (Get-Command node -ErrorAction SilentlyContinue) {
        Write-CILog "INFO" "Running integration tests..."
        $testScript = Join-Path $AgentsDir "tests\skill-integration.test.js"
        if (Test-Path $testScript) {
            try {
                node $testScript | Out-File -FilePath (Join-Path $CILogDir "pre-commit-tests.log") -Append
                Write-CILog "PASS" "Integration tests passed"
            } catch {
                Write-CILog "WARN" "Integration tests failed (non-blocking)"
            }
        } else {
            Write-CILog "WARN" "Integration test script not found"
        }
    } else {
        Write-CILog "WARN" "Node.js not available, skipping integration tests"
    }

    # 4. Check for forbidden patterns
    Write-CILog "INFO" "Checking for forbidden patterns..."
    $forbiddenPatterns = @("TODO", "FIXME", "XXX", "HACK")
    $foundForbidden = $false

    foreach ($pattern in $forbiddenPatterns) {
        $skillsDir = Join-Path $AgentsDir "skills"
        if (Test-Path $skillsDir) {
            # Select-String has no -Recurse switch; recurse via Get-ChildItem.
            # Avoid assigning to $matches, a PowerShell automatic variable.
            $patternHits = Get-ChildItem -Path $skillsDir -Filter *.md -Recurse |
                Select-String -Pattern $pattern
            if ($patternHits) {
                Write-CILog "WARN" "Found forbidden pattern: $pattern"
                $foundForbidden = $true
            }
        }
    }

    if (-not $foundForbidden) {
        Write-CILog "PASS" "No forbidden patterns found"
    }

    # Generate pre-commit report
    $reportFile = Join-Path $CIReportDir "pre-commit-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
    $report = @{
        timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
        hook_type = "pre-commit"
        exit_code = $exitCode
        checks_performed = @(
            "version_validation",
            "skill_audit",
            "integration_tests",
            "forbidden_patterns"
        )
        log_files = @(
            "pre-commit-versions.log",
            "pre-commit-skills.log",
            "pre-commit-tests.log"
        )
    }
    $report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile

    Write-CILog "INFO" "Pre-commit report saved to: $reportFile"

    if ($exitCode -eq 0) {
        Write-CILog "PASS" "Pre-commit validation completed successfully"
    } else {
        Write-CILog "FAIL" "Pre-commit validation failed"
    }

    return $exitCode
}

# Pre-push hook
function Invoke-PrePushHook {
    Write-CILog "INFO" "Running pre-push validation..."

    $exitCode = 0

    # 1. Full health check
    Write-CILog "INFO" "Running full health check..."
    if (Get-Command node -ErrorAction SilentlyContinue) {
        $healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
        if (Test-Path $healthScript) {
            try {
                node $healthScript | Out-File -FilePath (Join-Path $CILogDir "pre-push-health.log") -Append
                Write-CILog "PASS" "Health check passed"
            } catch {
                Write-CILog "FAIL" "Health check failed"
                $exitCode = 1
            }
        } else {
            Write-CILog "WARN" "Health monitor script not found"
        }
    } else {
        Write-CILog "WARN" "Node.js not available, using basic health check"
        $auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
        if (Test-Path $auditScript) {
            try {
                & $auditScript | Out-File -FilePath (Join-Path $CILogDir "pre-push-basic.log") -Append
                Write-CILog "PASS" "Basic health check passed"
            } catch {
                Write-CILog "FAIL" "Basic health check failed"
                $exitCode = 1
            }
        }
    }

    # 2. Advanced validation (if available)
    if (Get-Command node -ErrorAction SilentlyContinue) {
        $advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
        if (Test-Path $advancedScript) {
            Write-CILog "INFO" "Running advanced validation..."
            try {
                node $advancedScript | Out-File -FilePath (Join-Path $CILogDir "pre-push-advanced.log") -Append
                Write-CILog "PASS" "Advanced validation passed"
            } catch {
                Write-CILog "WARN" "Advanced validation found issues (non-blocking)"
            }
        }
    }

    # 3. Dependency validation
    if (Get-Command node -ErrorAction SilentlyContinue) {
        $dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
        if (Test-Path $dependencyScript) {
            Write-CILog "INFO" "Running dependency validation..."
            try {
                node $dependencyScript | Out-File -FilePath (Join-Path $CILogDir "pre-push-dependencies.log") -Append
                Write-CILog "PASS" "Dependency validation passed"
            } catch {
                Write-CILog "WARN" "Dependency validation found issues (non-blocking)"
            }
        }
    }

    # 4. Performance monitoring
    if (Get-Command node -ErrorAction SilentlyContinue) {
        $performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
        if (Test-Path $performanceScript) {
            Write-CILog "INFO" "Running performance monitoring..."
            try {
                node $performanceScript | Out-File -FilePath (Join-Path $CILogDir "pre-push-performance.log") -Append
                Write-CILog "PASS" "Performance monitoring passed"
            } catch {
                Write-CILog "WARN" "Performance monitoring found issues (non-blocking)"
            }
        }
    }

    # Generate pre-push report
    $reportFile = Join-Path $CIReportDir "pre-push-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
    $report = @{
        timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
        hook_type = "pre-push"
        exit_code = $exitCode
        checks_performed = @(
            "health_check",
            "advanced_validation",
            "dependency_validation",
            "performance_monitoring"
        )
        log_files = @(
            "pre-push-health.log",
            "pre-push-advanced.log",
            "pre-push-dependencies.log",
            "pre-push-performance.log"
        )
    }
    $report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile

    Write-CILog "INFO" "Pre-push report saved to: $reportFile"

    if ($exitCode -eq 0) {
        Write-CILog "PASS" "Pre-push validation completed successfully"
    } else {
        Write-CILog "FAIL" "Pre-push validation failed"
    }

    return $exitCode
}

# CI pipeline hook
function Invoke-CIPipelineHook {
    Write-CILog "INFO" "Running CI pipeline validation..."

    $exitCode = 0
    $pipelineStart = Get-Date

    # Create pipeline workspace
    $workspace = Join-Path $CIReportDir "pipeline-$(Get-Date -Format 'yyyyMMdd-HHmmss')"
    New-Item -ItemType Directory -Path $workspace -Force | Out-Null

    # 1. Environment validation
    Write-CILog "INFO" "Validating CI environment..."

    # Check required tools
    $requiredTools = @("node", "npm")
    foreach ($tool in $requiredTools) {
        if (Get-Command $tool -ErrorAction SilentlyContinue) {
            Write-CILog "PASS" "Tool available: $tool"
        } else {
            Write-CILog "FAIL" "Tool missing: $tool"
            $exitCode = 1
        }
    }

    # Check Node.js modules
    $packageJson = Join-Path $AgentsDir "package.json"
    if (Test-Path $packageJson) {
        Push-Location $AgentsDir
        try {
            npm list --depth=0 | Out-Null
            Write-CILog "PASS" "Node.js dependencies installed"
        } catch {
            Write-CILog "WARN" "Installing Node.js dependencies..."
            npm install | Out-File -FilePath (Join-Path $workspace "npm-install.log")
            if ($LASTEXITCODE -ne 0) {
                Write-CILog "FAIL" "Failed to install Node.js dependencies"
                $exitCode = 1
            }
        }
        Pop-Location
    }

    # 2. Full test suite
    Write-CILog "INFO" "Running full test suite..."

    # Integration tests
    $integrationTest = Join-Path $AgentsDir "tests\skill-integration.test.js"
    if (Test-Path $integrationTest) {
        try {
            node $integrationTest | Out-File -FilePath (Join-Path $workspace "integration-tests.log")
            Write-CILog "PASS" "Integration tests passed"
        } catch {
            Write-CILog "FAIL" "Integration tests failed"
            $exitCode = 1
        }
    }

    # Workflow validation tests
    $workflowTest = Join-Path $AgentsDir "tests\workflow-validation.test.js"
    if (Test-Path $workflowTest) {
        try {
            node $workflowTest | Out-File -FilePath (Join-Path $workspace "workflow-tests.log")
            Write-CILog "PASS" "Workflow validation tests passed"
        } catch {
            Write-CILog "FAIL" "Workflow validation tests failed"
            $exitCode = 1
        }
    }

    # 3. Comprehensive validation
    Write-CILog "INFO" "Running comprehensive validation..."

    # Health monitoring
    $healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
    if (Test-Path $healthScript) {
        try {
            node $healthScript | Out-File -FilePath (Join-Path $workspace "health-check.log")
            Write-CILog "PASS" "Health monitoring passed"
        } catch {
            Write-CILog "FAIL" "Health monitoring failed"
            $exitCode = 1
        }
    }

    # Advanced validation
    $advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
    if (Test-Path $advancedScript) {
        try {
            node $advancedScript | Out-File -FilePath (Join-Path $workspace "advanced-validation.log")
            Write-CILog "PASS" "Advanced validation passed"
        } catch {
            Write-CILog "WARN" "Advanced validation found issues"
        }
    }

    # Dependency validation
    $dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
    if (Test-Path $dependencyScript) {
        try {
            node $dependencyScript | Out-File -FilePath (Join-Path $workspace "dependency-validation.log")
            Write-CILog "PASS" "Dependency validation passed"
        } catch {
            Write-CILog "WARN" "Dependency validation found issues"
        }
    }

    # Performance monitoring
    $performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
    if (Test-Path $performanceScript) {
        try {
            node $performanceScript | Out-File -FilePath (Join-Path $workspace "performance-monitor.log")
            Write-CILog "PASS" "Performance monitoring passed"
        } catch {
            Write-CILog "WARN" "Performance monitoring found issues"
        }
    }

    # 4. Generate artifacts
    Write-CILog "INFO" "Generating CI artifacts..."

    $pipelineEnd = Get-Date
    $duration = ($pipelineEnd - $pipelineStart).TotalSeconds

    # Consolidated report
    $reportFile = Join-Path $workspace "ci-pipeline-report.json"
    $report = @{
        timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
        pipeline_type = "full_ci"
        duration_seconds = [int]$duration
        exit_code = $exitCode
        environment = @{
            node_version = (node --version)
            platform = $env:OS
            working_directory = $BaseDir
        }
        checks_performed = @(
            "environment_validation",
            "integration_tests",
            "workflow_validation_tests",
            "health_monitoring",
            "advanced_validation",
            "dependency_validation",
|
||||||
|
"performance_monitoring"
|
||||||
|
)
|
||||||
|
artifacts = @(
|
||||||
|
"integration-tests.log",
|
||||||
|
"workflow-tests.log",
|
||||||
|
"health-check.log",
|
||||||
|
"advanced-validation.log",
|
||||||
|
"dependency-validation.log",
|
||||||
|
"performance-monitor.log",
|
||||||
|
"npm-install.log"
|
||||||
|
)
|
||||||
|
workspace = $workspace
|
||||||
|
}
|
||||||
|
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
|
||||||
|
|
||||||
|
Write-CILog "INFO" "CI pipeline report saved to: $reportFile"
|
||||||
|
Write-CILog "INFO" "CI artifacts saved to: $workspace"
|
||||||
|
Write-CILog "INFO" "Pipeline duration: $([int]$duration)s"
|
||||||
|
|
||||||
|
if ($exitCode -eq 0) {
|
||||||
|
Write-CILog "PASS" "CI pipeline completed successfully"
|
||||||
|
} else {
|
||||||
|
Write-CILog "FAIL" "CI pipeline failed"
|
||||||
|
}
|
||||||
|
|
||||||
|
return $exitCode
|
||||||
|
}
|
||||||
|
|
||||||
|
# Install Git hooks
|
||||||
|
function Install-GitHooks {
|
||||||
|
Write-CILog "INFO" "Installing Git hooks..."
|
||||||
|
|
||||||
|
$hooksDir = Join-Path $BaseDir ".git\hooks"
|
||||||
|
$agentsHooksDir = Join-Path $AgentsDir "scripts\git-hooks"
|
||||||
|
|
||||||
|
# Create git-hooks directory
|
||||||
|
if (-not (Test-Path $agentsHooksDir)) {
|
||||||
|
New-Item -ItemType Directory -Path $agentsHooksDir -Force | Out-Null
|
||||||
|
}
|
||||||
|
|
||||||
|
# Create pre-commit hook
|
||||||
|
$preCommitContent = @'
|
||||||
|
#!/bin/bash
|
||||||
|
# Pre-commit hook for .agents validation
|
||||||
|
echo "Running .agents pre-commit validation..."
|
||||||
|
if bash .agents/scripts/ci-hooks.sh pre-commit; then
|
||||||
|
echo "Pre-commit validation passed"
|
||||||
|
exit 0
|
||||||
|
else
|
||||||
|
echo "Pre-commit validation failed"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
'@
|
||||||
|
$preCommitContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-commit") -Encoding UTF8
|
||||||
|
|
||||||
|
# Create pre-push hook
|
||||||
|
$prePushContent = @'
|
||||||
|
#!/bin/bash
|
||||||
|
# Pre-push hook for .agents validation
|
||||||
|
echo "Running .agents pre-push validation..."
|
||||||
|
if bash .agents/scripts/ci-hooks.sh pre-push; then
|
||||||
|
echo "Pre-push validation passed"
|
||||||
|
exit 0
|
||||||
|
else
|
||||||
|
echo "Pre-push validation failed"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
'@
|
||||||
|
$prePushContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-push") -Encoding UTF8
|
||||||
|
|
||||||
|
# Install hooks if .git directory exists
|
||||||
|
if (Test-Path $hooksDir) {
|
||||||
|
Copy-Item (Join-Path $agentsHooksDir "pre-commit") $hooksDir -Force
|
||||||
|
Copy-Item (Join-Path $agentsHooksDir "pre-push") $hooksDir -Force
|
||||||
|
Write-CILog "PASS" "Git hooks installed successfully"
|
||||||
|
} else {
|
||||||
|
Write-CILog "WARN" "Git repository not found, hooks copied to .agents\scripts\git-hooks"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Main execution
|
||||||
|
switch ($Command) {
|
||||||
|
"pre-commit" {
|
||||||
|
exit (Invoke-PreCommitHook)
|
||||||
|
}
|
||||||
|
"pre-push" {
|
||||||
|
exit (Invoke-PrePushHook)
|
||||||
|
}
|
||||||
|
"ci-pipeline" {
|
||||||
|
exit (Invoke-CIPipelineHook)
|
||||||
|
}
|
||||||
|
"install-hooks" {
|
||||||
|
Install-GitHooks
|
||||||
|
}
|
||||||
|
"help" {
|
||||||
|
Write-Host "Usage: .\ci-hooks.ps1 -Command {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
|
||||||
|
Write-Host ""
|
||||||
|
Write-Host "Commands:"
|
||||||
|
Write-Host " pre-commit - Run pre-commit validation"
|
||||||
|
Write-Host " pre-push - Run pre-push validation"
|
||||||
|
Write-Host " ci-pipeline - Run full CI pipeline"
|
||||||
|
Write-Host " install-hooks - Install Git hooks"
|
||||||
|
Write-Host " help - Show this help"
|
||||||
|
}
|
||||||
|
default {
|
||||||
|
Write-Host "Unknown command: $Command"
|
||||||
|
Write-Host "Use 'help' to see available commands"
|
||||||
|
exit 1
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -0,0 +1,445 @@
#!/bin/bash

# ci-hooks.sh - Continuous integration hooks for .agents
# Part of LCBP3-DMS Phase 3 enhancements

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"

# CI configuration
CI_LOG_DIR="$AGENTS_DIR/logs/ci"
CI_REPORT_DIR="$AGENTS_DIR/reports/ci"

# Ensure directories exist
mkdir -p "$CI_LOG_DIR" "$CI_REPORT_DIR"

# Logging function
ci_log() {
    local level="$1"
    local message="$2"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    local log_file="$CI_LOG_DIR/ci-$(date '+%Y-%m-%d').log"

    echo "[$timestamp] [$level] $message" | tee -a "$log_file"

    # Console output with colors
    case "$level" in
        "INFO") echo -e "${BLUE}$message${NC}" ;;
        "PASS") echo -e "${GREEN}$message${NC}" ;;
        "WARN") echo -e "${YELLOW}$message${NC}" ;;
        "FAIL") echo -e "${RED}$message${NC}" ;;
        *) echo "$message" ;;
    esac
}

# Pre-commit hook
pre_commit_hook() {
    ci_log "INFO" "Running pre-commit validation..."

    local exit_code=0

    # 1. Run version validation
    ci_log "INFO" "Checking version consistency..."
    if "$AGENTS_DIR/scripts/bash/validate-versions.sh" >> "$CI_LOG_DIR/pre-commit-versions.log" 2>&1; then
        ci_log "PASS" "Version validation passed"
    else
        ci_log "FAIL" "Version validation failed"
        exit_code=1
    fi

    # 2. Run skill audit
    ci_log "INFO" "Auditing skills..."
    if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-commit-skills.log" 2>&1; then
        ci_log "PASS" "Skill audit passed"
    else
        ci_log "FAIL" "Skill audit failed"
        exit_code=1
    fi

    # 3. Run integration tests (if Node.js available)
    if command -v node >/dev/null 2>&1; then
        ci_log "INFO" "Running integration tests..."
        if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$CI_LOG_DIR/pre-commit-tests.log" 2>&1; then
            ci_log "PASS" "Integration tests passed"
        else
            ci_log "WARN" "Integration tests failed (non-blocking)"
        fi
    else
        ci_log "WARN" "Node.js not available, skipping integration tests"
    fi

    # 4. Check for forbidden patterns
    ci_log "INFO" "Checking for forbidden patterns..."
    local forbidden_patterns=("TODO" "FIXME" "XXX" "HACK")
    local found_forbidden=false

    for pattern in "${forbidden_patterns[@]}"; do
        if grep -r "$pattern" "$AGENTS_DIR/skills" --include="*.md" >/dev/null 2>&1; then
            ci_log "WARN" "Found forbidden pattern: $pattern"
            found_forbidden=true
        fi
    done

    if [ "$found_forbidden" = false ]; then
        ci_log "PASS" "No forbidden patterns found"
    fi

    # Generate pre-commit report
    local report_file="$CI_REPORT_DIR/pre-commit-$(date '+%Y%m%d-%H%M%S').json"
    cat > "$report_file" << EOF
{
  "timestamp": "$(date -Iseconds)",
  "hook_type": "pre-commit",
  "exit_code": $exit_code,
  "checks_performed": [
    "version_validation",
    "skill_audit",
    "integration_tests",
    "forbidden_patterns"
  ],
  "log_files": [
    "pre-commit-versions.log",
    "pre-commit-skills.log",
    "pre-commit-tests.log"
  ]
}
EOF

    ci_log "INFO" "Pre-commit report saved to: $report_file"

    if [ $exit_code -eq 0 ]; then
        ci_log "PASS" "Pre-commit validation completed successfully"
    else
        ci_log "FAIL" "Pre-commit validation failed"
    fi

    return $exit_code
}

# Pre-push hook
pre_push_hook() {
    ci_log "INFO" "Running pre-push validation..."

    local exit_code=0

    # 1. Full health check
    ci_log "INFO" "Running full health check..."
    if command -v node >/dev/null 2>&1; then
        if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$CI_LOG_DIR/pre-push-health.log" 2>&1; then
            ci_log "PASS" "Health check passed"
        else
            ci_log "FAIL" "Health check failed"
            exit_code=1
        fi
    else
        ci_log "WARN" "Node.js not available, using basic health check"
        if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-push-basic.log" 2>&1; then
            ci_log "PASS" "Basic health check passed"
        else
            ci_log "FAIL" "Basic health check failed"
            exit_code=1
        fi
    fi

    # 2. Advanced validation (if available)
    if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/advanced-validator.js" ]; then
        ci_log "INFO" "Running advanced validation..."
        if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$CI_LOG_DIR/pre-push-advanced.log" 2>&1; then
            ci_log "PASS" "Advanced validation passed"
        else
            ci_log "WARN" "Advanced validation found issues (non-blocking)"
        fi
    fi

    # 3. Dependency validation
    if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/dependency-validator.js" ]; then
        ci_log "INFO" "Running dependency validation..."
        if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$CI_LOG_DIR/pre-push-dependencies.log" 2>&1; then
            ci_log "PASS" "Dependency validation passed"
        else
            ci_log "WARN" "Dependency validation found issues (non-blocking)"
        fi
    fi

    # 4. Performance monitoring
    if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/performance-monitor.js" ]; then
        ci_log "INFO" "Running performance monitoring..."
        if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$CI_LOG_DIR/pre-push-performance.log" 2>&1; then
            ci_log "PASS" "Performance monitoring passed"
        else
            ci_log "WARN" "Performance monitoring found issues (non-blocking)"
        fi
    fi

    # Generate pre-push report
    local report_file="$CI_REPORT_DIR/pre-push-$(date '+%Y%m%d-%H%M%S').json"
    cat > "$report_file" << EOF
{
  "timestamp": "$(date -Iseconds)",
  "hook_type": "pre-push",
  "exit_code": $exit_code,
  "checks_performed": [
    "health_check",
    "advanced_validation",
    "dependency_validation",
    "performance_monitoring"
  ],
  "log_files": [
    "pre-push-health.log",
    "pre-push-advanced.log",
    "pre-push-dependencies.log",
    "pre-push-performance.log"
  ]
}
EOF

    ci_log "INFO" "Pre-push report saved to: $report_file"

    if [ $exit_code -eq 0 ]; then
        ci_log "PASS" "Pre-push validation completed successfully"
    else
        ci_log "FAIL" "Pre-push validation failed"
    fi

    return $exit_code
}

# CI pipeline hook
ci_pipeline_hook() {
    ci_log "INFO" "Running CI pipeline validation..."

    local exit_code=0
    local pipeline_start=$(date +%s)

    # Create pipeline workspace
    local workspace="$CI_REPORT_DIR/pipeline-$(date '+%Y%m%d-%H%M%S')"
    mkdir -p "$workspace"

    # 1. Environment validation
    ci_log "INFO" "Validating CI environment..."

    # Check required tools
    local required_tools=("node" "npm")
    for tool in "${required_tools[@]}"; do
        if command -v "$tool" >/dev/null 2>&1; then
            ci_log "PASS" "Tool available: $tool"
        else
            ci_log "FAIL" "Tool missing: $tool"
            exit_code=1
        fi
    done

    # Check Node.js modules
    if [ -f "$AGENTS_DIR/package.json" ]; then
        cd "$AGENTS_DIR"
        if npm list --depth=0 >/dev/null 2>&1; then
            ci_log "PASS" "Node.js dependencies installed"
        else
            ci_log "WARN" "Installing Node.js dependencies..."
            npm install >> "$workspace/npm-install.log" 2>&1 || {
                ci_log "FAIL" "Failed to install Node.js dependencies"
                exit_code=1
            }
        fi
        cd "$BASE_DIR"
    fi

    # 2. Full test suite
    ci_log "INFO" "Running full test suite..."

    # Integration tests
    if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$workspace/integration-tests.log" 2>&1; then
        ci_log "PASS" "Integration tests passed"
    else
        ci_log "FAIL" "Integration tests failed"
        exit_code=1
    fi

    # Workflow validation tests
    if node "$AGENTS_DIR/tests/workflow-validation.test.js" >> "$workspace/workflow-tests.log" 2>&1; then
        ci_log "PASS" "Workflow validation tests passed"
    else
        ci_log "FAIL" "Workflow validation tests failed"
        exit_code=1
    fi

    # 3. Comprehensive validation
    ci_log "INFO" "Running comprehensive validation..."

    # Health monitoring
    if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$workspace/health-check.log" 2>&1; then
        ci_log "PASS" "Health monitoring passed"
    else
        ci_log "FAIL" "Health monitoring failed"
        exit_code=1
    fi

    # Advanced validation
    if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$workspace/advanced-validation.log" 2>&1; then
        ci_log "PASS" "Advanced validation passed"
    else
        ci_log "WARN" "Advanced validation found issues"
    fi

    # Dependency validation
    if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$workspace/dependency-validation.log" 2>&1; then
        ci_log "PASS" "Dependency validation passed"
    else
        ci_log "WARN" "Dependency validation found issues"
    fi

    # Performance monitoring
    if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$workspace/performance-monitor.log" 2>&1; then
        ci_log "PASS" "Performance monitoring passed"
    else
        ci_log "WARN" "Performance monitoring found issues"
    fi

    # 4. Generate artifacts
    ci_log "INFO" "Generating CI artifacts..."

    local pipeline_end=$(date +%s)
    local duration=$((pipeline_end - pipeline_start))

    # Consolidated report
    local report_file="$workspace/ci-pipeline-report.json"
    cat > "$report_file" << EOF
{
  "timestamp": "$(date -Iseconds)",
  "pipeline_type": "full_ci",
  "duration_seconds": $duration,
  "exit_code": $exit_code,
  "environment": {
    "node_version": "$(node --version)",
    "platform": "$(uname -s)",
    "working_directory": "$BASE_DIR"
  },
  "checks_performed": [
    "environment_validation",
    "integration_tests",
    "workflow_validation_tests",
    "health_monitoring",
    "advanced_validation",
    "dependency_validation",
    "performance_monitoring"
  ],
  "artifacts": [
    "integration-tests.log",
    "workflow-tests.log",
    "health-check.log",
    "advanced-validation.log",
    "dependency-validation.log",
    "performance-monitor.log",
    "npm-install.log"
  ],
  "workspace": "$workspace"
}
EOF

    ci_log "INFO" "CI pipeline report saved to: $report_file"
    ci_log "INFO" "CI artifacts saved to: $workspace"
    ci_log "INFO" "Pipeline duration: ${duration}s"

    if [ $exit_code -eq 0 ]; then
        ci_log "PASS" "CI pipeline completed successfully"
    else
        ci_log "FAIL" "CI pipeline failed"
    fi

    return $exit_code
}

# Install Git hooks
install_git_hooks() {
    ci_log "INFO" "Installing Git hooks..."

    local hooks_dir="$BASE_DIR/.git/hooks"
    local agents_hooks_dir="$AGENTS_DIR/scripts/git-hooks"

    # Create git-hooks directory
    mkdir -p "$agents_hooks_dir"

    # Create pre-commit hook
    cat > "$agents_hooks_dir/pre-commit" << 'EOF'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
    echo "Pre-commit validation passed"
    exit 0
else
    echo "Pre-commit validation failed"
    exit 1
fi
EOF

    # Create pre-push hook
    cat > "$agents_hooks_dir/pre-push" << 'EOF'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
    echo "Pre-push validation passed"
    exit 0
else
    echo "Pre-push validation failed"
    exit 1
fi
EOF

    # Make hooks executable
    chmod +x "$agents_hooks_dir/pre-commit"
    chmod +x "$agents_hooks_dir/pre-push"

    # Install hooks if .git directory exists
    if [ -d "$hooks_dir" ]; then
        cp "$agents_hooks_dir/pre-commit" "$hooks_dir/"
        cp "$agents_hooks_dir/pre-push" "$hooks_dir/"
        ci_log "PASS" "Git hooks installed successfully"
    else
        ci_log "WARN" "Git repository not found, hooks copied to .agents/scripts/git-hooks"
    fi
}

# Main function
main() {
    local command="${1:-help}"

    case "$command" in
        "pre-commit")
            pre_commit_hook
            ;;
        "pre-push")
            pre_push_hook
            ;;
        "ci-pipeline")
            ci_pipeline_hook
            ;;
        "install-hooks")
            install_git_hooks
            ;;
        "help"|*)
            echo "Usage: $0 {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
            echo ""
            echo "Commands:"
            echo "  pre-commit    - Run pre-commit validation"
            echo "  pre-push      - Run pre-push validation"
            echo "  ci-pipeline   - Run full CI pipeline"
            echo "  install-hooks - Install Git hooks"
            echo "  help          - Show this help"
            ;;
    esac
}

# Run main function with all arguments
main "$@"
@@ -0,0 +1,457 @@
#!/usr/bin/env node

/**
 * dependency-validator.js - Skill dependency validation system
 * Part of LCBP3-DMS Phase 3 enhancements
 */

const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');

// Dependency validation class
class DependencyValidator {
  constructor() {
    this.validationResults = {
      timestamp: new Date().toISOString(),
      dependency_graph: {},
      circular_dependencies: [],
      missing_dependencies: [],
      orphaned_skills: [],
      dependency_chains: {},
      validation_summary: {
        total_skills: 0,
        skills_with_dependencies: 0,
        circular_dependencies_found: 0,
        missing_dependencies_found: 0,
        orphaned_skills_found: 0,
        max_dependency_depth: 0,
        validation_status: 'unknown'
      }
    };
  }

  log(message, level = 'info') {
    const colors = {
      info: '\x1b[36m',     // Cyan
      pass: '\x1b[32m',     // Green
      fail: '\x1b[31m',     // Red
      warn: '\x1b[33m',     // Yellow
      critical: '\x1b[35m', // Magenta
      reset: '\x1b[0m'
    };

    const color = colors[level] || colors.info;
    console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
  }

  extractSkillDependencies(skillPath, skillName) {
    const skillMdPath = path.join(skillPath, 'SKILL.md');

    if (!fs.existsSync(skillMdPath)) {
      this.log(`No SKILL.md found for ${skillName}`, 'warn');
      return { dependencies: [], handoffs: [], error: 'SKILL.md not found' };
    }

    try {
      const content = fs.readFileSync(skillMdPath, 'utf8');

      // Extract dependencies from front matter
      let dependencies = [];
      let handoffs = [];

      const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
      if (frontMatterMatch) {
        try {
          const frontMatter = yaml.load(frontMatterMatch[1]);

          // Handle depends-on field
          if (frontMatter['depends-on']) {
            if (Array.isArray(frontMatter['depends-on'])) {
              dependencies = frontMatter['depends-on'];
            } else {
              dependencies = [frontMatter['depends-on']];
            }
          }

          // Handle handoffs field
          if (frontMatter.handoffs && Array.isArray(frontMatter.handoffs)) {
            handoffs = frontMatter.handoffs.map(h => h.agent);
          }

        } catch (yamlError) {
          this.log(`Invalid YAML in ${skillName} front matter: ${yamlError.message}`, 'warn');
        }
      }

      // Also extract skill references from content
      const contentSkillRefs = content.match(/@speckit-\w+/g) || [];
      const contentDependencies = contentSkillRefs.map(ref => ref.replace('@', ''));

      // Merge dependencies (avoid duplicates)
      const allDependencies = [...new Set([...dependencies, ...contentDependencies])];

      return {
        dependencies: allDependencies,
        handoffs: handoffs,
        content_references: contentSkillRefs,
        front_matter_dependencies: dependencies,
        error: null
      };

    } catch (error) {
      this.log(`Error reading ${skillName}: ${error.message}`, 'warn');
      return { dependencies: [], handoffs: [], error: error.message };
    }
  }

  buildDependencyGraph() {
    this.log('Building dependency graph...', 'info');

    if (!fs.existsSync(SKILLS_DIR)) {
      this.log('Skills directory not found', 'fail');
      return;
    }

    const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
      const itemPath = path.join(SKILLS_DIR, item);
      return fs.statSync(itemPath).isDirectory();
    });

    this.validationResults.validation_summary.total_skills = skillDirs.length;

    // Extract dependencies for each skill
    for (const skillDir of skillDirs) {
      const skillPath = path.join(SKILLS_DIR, skillDir);
      const dependencyInfo = this.extractSkillDependencies(skillPath, skillDir);

      this.validationResults.dependency_graph[skillDir] = dependencyInfo;

      if (dependencyInfo.dependencies.length > 0 || dependencyInfo.handoffs.length > 0) {
        this.validationResults.validation_summary.skills_with_dependencies++;
      }
    }

    this.log(`Analyzed ${skillDirs.length} skills`, 'info');
    this.log(`Skills with dependencies: ${this.validationResults.validation_summary.skills_with_dependencies}`, 'info');
  }

  validateDependencies() {
    this.log('Validating dependencies...', 'info');

    const { dependency_graph } = this.validationResults;
    const allSkills = Object.keys(dependency_graph);

    // Check for missing dependencies
    for (const [skillName, dependencyInfo] of Object.entries(dependency_graph)) {
      for (const dependency of dependencyInfo.dependencies) {
        if (!allSkills.includes(dependency)) {
          this.validationResults.missing_dependencies.push({
            skill: skillName,
            missing_dependency: dependency,
            dependency_type: 'depends-on'
          });
          this.validationResults.validation_summary.missing_dependencies_found++;
          this.log(`Missing dependency: ${skillName} depends on ${dependency}`, 'fail');
        }
      }

      for (const handoff of dependencyInfo.handoffs) {
        if (!allSkills.includes(handoff)) {
          this.validationResults.missing_dependencies.push({
            skill: skillName,
            missing_dependency: handoff,
            dependency_type: 'handoff'
          });
          this.validationResults.validation_summary.missing_dependencies_found++;
          this.log(`Missing handoff: ${skillName} hands off to ${handoff}`, 'fail');
        }
      }
    }

    // Check for orphaned skills (no one depends on them)
    const dependedOnSkills = new Set();
    for (const dependencyInfo of Object.values(dependency_graph)) {
      dependencyInfo.dependencies.forEach(dep => dependedOnSkills.add(dep));
      dependencyInfo.handoffs.forEach(handoff => dependedOnSkills.add(handoff));
    }

    for (const skill of allSkills) {
      if (!dependedOnSkills.has(skill) && skill !== 'speckit-constitution') {
        // Constitution is allowed to be orphaned (it's a starting point)
        this.validationResults.orphaned_skills.push(skill);
        this.validationResults.validation_summary.orphaned_skills_found++;
        this.log(`Orphaned skill: ${skill} (no dependencies on it)`, 'warn');
      }
    }
  }

  detectCircularDependencies() {
    this.log('Detecting circular dependencies...', 'info');

    const { dependency_graph } = this.validationResults;
    const visited = new Set();
    const recursionStack = new Set();
    const circularDeps = [];

    function dfs(skillName, path = []) {
      if (recursionStack.has(skillName)) {
        // Found circular dependency
        const cycleStart = path.indexOf(skillName);
        const cycle = path.slice(cycleStart).concat(skillName);
        circularDeps.push(cycle);
        return;
      }

      if (visited.has(skillName)) {
        return;
      }

      visited.add(skillName);
      recursionStack.add(skillName);
      path.push(skillName);

      const dependencyInfo = dependency_graph[skillName];
      if (dependencyInfo) {
        for (const dependency of dependencyInfo.dependencies) {
          dfs(dependency, [...path]);
        }
      }

      recursionStack.delete(skillName);
    }

    // Run DFS from each skill
    for (const skillName of Object.keys(dependency_graph)) {
      if (!visited.has(skillName)) {
        dfs(skillName);
      }
    }

    this.validationResults.circular_dependencies = circularDeps;
    this.validationResults.validation_summary.circular_dependencies_found = circularDeps.length;

    if (circularDeps.length > 0) {
      this.log(`Found ${circularDeps.length} circular dependencies:`, 'critical');
      circularDeps.forEach((cycle, index) => {
        this.log(`  ${index + 1}. ${cycle.join(' -> ')}`, 'critical');
      });
    } else {
      this.log('No circular dependencies found', 'pass');
    }
  }

  calculateDependencyChains() {
    this.log('Calculating dependency chains...', 'info');

    const { dependency_graph } = this.validationResults;
    const chains = {};

    function calculateDepth(skillName, visited = new Set()) {
      if (visited.has(skillName)) {
        return 0; // Circular dependency protection
      }

      visited.add(skillName);

      const dependencyInfo = dependency_graph[skillName];
      if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
        return 1;
      }

      let maxDepth = 0;
      for (const dependency of dependencyInfo.dependencies) {
        const depth = calculateDepth(dependency, new Set(visited));
        maxDepth = Math.max(maxDepth, depth);
      }

      return maxDepth + 1;
    }

    function getDependencyChain(skillName) {
      const dependencyInfo = dependency_graph[skillName];
      if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
        return [skillName];
      }

      const chains = [];
      for (const dependency of dependencyInfo.dependencies) {
        const depChain = getDependencyChain(dependency);
        chains.push(depChain.concat(skillName));
      }

      // Return the longest chain
      return chains.reduce((longest, current) =>
        current.length > longest.length ? current : longest, [skillName]
      );
    }

    for (const skillName of Object.keys(dependency_graph)) {
      const depth = calculateDepth(skillName);
      const chain = getDependencyChain(skillName);

      chains[skillName] = {
        depth: depth,
        chain: chain,
        chain_length: chain.length
      };
    }

    this.validationResults.dependency_chains = chains;

    const maxDepth = Math.max(...Object.values(chains).map(c => c.depth));
    this.validationResults.validation_summary.max_dependency_depth = maxDepth;
|
||||||
|
|
||||||
|
this.log(`Maximum dependency depth: ${maxDepth}`, 'info');
|
||||||
|
}
|
||||||
|
|
||||||
|
validateWorkflowDependencies() {
|
||||||
|
this.log('Validating workflow dependencies...', 'info');
|
||||||
|
|
||||||
|
if (!fs.existsSync(WORKFLOWS_DIR)) {
|
||||||
|
this.log('Workflows directory not found', 'warn');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
|
||||||
|
const allSkills = Object.keys(this.validationResults.dependency_graph);
|
||||||
|
|
||||||
|
for (const workflowFile of workflowFiles) {
|
||||||
|
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
|
||||||
|
|
||||||
|
try {
|
||||||
|
const content = fs.readFileSync(workflowPath, 'utf8');
|
||||||
|
const skillReferences = content.match(/@speckit-\w+/g) || [];
|
||||||
|
|
||||||
|
for (const skillRef of skillReferences) {
|
||||||
|
const skillName = skillRef.replace('@', '');
|
||||||
|
|
||||||
|
if (!allSkills.includes(skillName)) {
|
||||||
|
this.validationResults.missing_dependencies.push({
|
||||||
|
workflow: workflowFile,
|
||||||
|
missing_dependency: skillName,
|
||||||
|
dependency_type: 'workflow-reference'
|
||||||
|
});
|
||||||
|
this.validationResults.validation_summary.missing_dependencies_found++;
|
||||||
|
this.log(`Workflow ${workflowFile} references missing skill: ${skillRef}`, 'fail');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
} catch (error) {
|
||||||
|
this.log(`Error reading workflow ${workflowFile}: ${error.message}`, 'warn');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
generateDependencyReport() {
|
||||||
|
this.log('Generating dependency report...', 'info');
|
||||||
|
|
||||||
|
// Determine overall validation status
|
||||||
|
const summary = this.validationResults.validation_summary;
|
||||||
|
|
||||||
|
if (summary.circular_dependencies_found > 0) {
|
||||||
|
summary.validation_status = 'critical';
|
||||||
|
} else if (summary.missing_dependencies_found > 0) {
|
||||||
|
summary.validation_status = 'failed';
|
||||||
|
} else if (summary.orphaned_skills_found > 0) {
|
||||||
|
summary.validation_status = 'warning';
|
||||||
|
} else {
|
||||||
|
summary.validation_status = 'passed';
|
||||||
|
}
|
||||||
|
|
||||||
|
// Save report
|
||||||
|
const reportPath = path.join(AGENTS_DIR, 'reports', 'dependency-validation.json');
|
||||||
|
const reportsDir = path.dirname(reportPath);
|
||||||
|
|
||||||
|
if (!fs.existsSync(reportsDir)) {
|
||||||
|
fs.mkdirSync(reportsDir, { recursive: true });
|
||||||
|
}
|
||||||
|
|
||||||
|
fs.writeFileSync(reportPath, JSON.stringify(this.validationResults, null, 2));
|
||||||
|
this.log(`Dependency validation report saved to: ${reportPath}`, 'info');
|
||||||
|
}
|
||||||
|
|
||||||
|
printSummary() {
|
||||||
|
const summary = this.validationResults.validation_summary;
|
||||||
|
|
||||||
|
this.log('=== Dependency Validation Summary ===', 'info');
|
||||||
|
this.log(`Total skills: ${summary.total_skills}`, 'info');
|
||||||
|
this.log(`Skills with dependencies: ${summary.skills_with_dependencies}`, 'info');
|
||||||
|
this.log(`Circular dependencies: ${summary.circular_dependencies_found}`, summary.circular_dependencies_found > 0 ? 'critical' : 'pass');
|
||||||
|
this.log(`Missing dependencies: ${summary.missing_dependencies_found}`, summary.missing_dependencies_found > 0 ? 'fail' : 'pass');
|
||||||
|
this.log(`Orphaned skills: ${summary.orphaned_skills_found}`, summary.orphaned_skills_found > 0 ? 'warn' : 'info');
|
||||||
|
this.log(`Max dependency depth: ${summary.max_dependency_depth}`, 'info');
|
||||||
|
this.log(`Validation status: ${summary.validation_status.toUpperCase()}`,
|
||||||
|
summary.validation_status === 'passed' ? 'pass' :
|
||||||
|
summary.validation_status === 'warning' ? 'warn' : 'fail');
|
||||||
|
|
||||||
|
// Show longest dependency chains
|
||||||
|
const chains = this.validationResults.dependency_chains;
|
||||||
|
const sortedChains = Object.entries(chains)
|
||||||
|
.sort(([,a], [,b]) => b.depth - a.depth)
|
||||||
|
.slice(0, 3);
|
||||||
|
|
||||||
|
if (sortedChains.length > 0) {
|
||||||
|
this.log('Top 3 longest dependency chains:', 'info');
|
||||||
|
sortedChains.forEach(([skillName, chainInfo], index) => {
|
||||||
|
this.log(` ${index + 1}. ${chainInfo.chain.join(' -> ')} (depth: ${chainInfo.depth})`, 'info');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async runDependencyValidation() {
|
||||||
|
this.log('Starting dependency validation...', 'info');
|
||||||
|
this.log(`Base directory: ${BASE_DIR}`, 'info');
|
||||||
|
|
||||||
|
// Build dependency graph
|
||||||
|
this.buildDependencyGraph();
|
||||||
|
|
||||||
|
// Validate dependencies
|
||||||
|
this.validateDependencies();
|
||||||
|
|
||||||
|
// Detect circular dependencies
|
||||||
|
this.detectCircularDependencies();
|
||||||
|
|
||||||
|
// Calculate dependency chains
|
||||||
|
this.calculateDependencyChains();
|
||||||
|
|
||||||
|
// Validate workflow dependencies
|
||||||
|
this.validateWorkflowDependencies();
|
||||||
|
|
||||||
|
// Generate report
|
||||||
|
this.generateDependencyReport();
|
||||||
|
|
||||||
|
// Print summary
|
||||||
|
this.printSummary();
|
||||||
|
|
||||||
|
return this.validationResults;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// CLI interface
|
||||||
|
async function main() {
|
||||||
|
const validator = new DependencyValidator();
|
||||||
|
|
||||||
|
try {
|
||||||
|
const results = await validator.runDependencyValidation();
|
||||||
|
const status = results.validation_summary.validation_status;
|
||||||
|
process.exit(status === 'passed' || status === 'warning' ? 0 : 1);
|
||||||
|
} catch (error) {
|
||||||
|
console.error('Dependency validation failed:', error);
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Export for use in other modules
|
||||||
|
module.exports = { DependencyValidator };
|
||||||
|
|
||||||
|
// Run if called directly
|
||||||
|
if (require.main === module) {
|
||||||
|
main();
|
||||||
|
}
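The cycle-detection DFS above operates on the validator's `dependency_graph`; the same algorithm can be exercised standalone. A minimal sketch, assuming a hypothetical plain adjacency map in the same `{ name: { dependencies: [...] } }` shape:

```javascript
// Standalone sketch of the cycle-detection DFS used in detectCircularDependencies.
// The input graph here is hypothetical example data, not project state.
function findCircularDependencies(graph) {
  const visited = new Set();
  const recursionStack = new Set();
  const circularDeps = [];

  function dfs(name, path = []) {
    if (recursionStack.has(name)) {
      // Back edge found: slice the path from the first occurrence to form the cycle
      const cycleStart = path.indexOf(name);
      circularDeps.push(path.slice(cycleStart).concat(name));
      return;
    }
    if (visited.has(name)) return;

    visited.add(name);
    recursionStack.add(name);
    path.push(name);

    const info = graph[name];
    if (info) {
      for (const dep of info.dependencies) {
        dfs(dep, [...path]); // copy the path so sibling branches stay independent
      }
    }
    recursionStack.delete(name);
  }

  for (const name of Object.keys(graph)) {
    if (!visited.has(name)) dfs(name);
  }
  return circularDeps;
}

const cycles = findCircularDependencies({
  a: { dependencies: ['b'] },
  b: { dependencies: ['c'] },
  c: { dependencies: ['a'] },
  d: { dependencies: [] }
});
console.log(JSON.stringify(cycles)); // [["a","b","c","a"]]
```

Note the `recursionStack` / `visited` split: the stack detects back edges within the current path, while `visited` prevents re-exploring nodes already cleared on earlier traversals.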
@@ -0,0 +1,369 @@
#!/usr/bin/env node

/**
 * health-monitor.js - Automated health monitoring system for .agents
 * Part of LCBP3-DMS Phase 3 enhancements
 */

const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');

// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const HEALTH_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'health.log');
const HEALTH_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'health-report.json');

// Ensure directories exist
[path.dirname(HEALTH_LOG_PATH), path.dirname(HEALTH_REPORT_PATH)].forEach(dir => {
  if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir, { recursive: true });
  }
});

// Health monitoring class
class HealthMonitor {
  constructor() {
    this.startTime = new Date();
    this.metrics = {
      timestamp: this.startTime.toISOString(),
      version: '1.8.6',
      checks: {},
      summary: {
        total_checks: 0,
        passed_checks: 0,
        failed_checks: 0,
        warnings: 0,
        overall_health: 'unknown'
      }
    };
  }

  log(message, level = 'info') {
    const timestamp = new Date().toISOString();
    const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;

    // Console output with colors
    const colors = {
      info: '\x1b[36m', // Cyan
      pass: '\x1b[32m', // Green
      fail: '\x1b[31m', // Red
      warn: '\x1b[33m', // Yellow
      reset: '\x1b[0m'
    };

    const color = colors[level] || colors.info;
    console.log(`${color}${logEntry.trim()}${colors.reset}`);

    // File logging
    fs.appendFileSync(HEALTH_LOG_PATH, logEntry);
  }

  checkDirectoryExists(dirPath, checkName) {
    this.metrics.summary.total_checks++;
    const exists = fs.existsSync(dirPath);

    this.metrics.checks[checkName] = {
      type: 'directory_exists',
      status: exists ? 'pass' : 'fail',
      path: dirPath,
      message: exists ? 'Directory exists' : 'Directory missing'
    };

    if (exists) {
      this.metrics.summary.passed_checks++;
      this.log(`${checkName}: PASS - Directory exists`, 'pass');
    } else {
      this.metrics.summary.failed_checks++;
      this.log(`${checkName}: FAIL - Directory missing: ${dirPath}`, 'fail');
    }

    return exists;
  }

  checkFileExists(filePath, checkName) {
    this.metrics.summary.total_checks++;
    const exists = fs.existsSync(filePath);

    this.metrics.checks[checkName] = {
      type: 'file_exists',
      status: exists ? 'pass' : 'fail',
      path: filePath,
      message: exists ? 'File exists' : 'File missing'
    };

    if (exists) {
      this.metrics.summary.passed_checks++;
      this.log(`${checkName}: PASS - File exists`, 'pass');
    } else {
      this.metrics.summary.failed_checks++;
      this.log(`${checkName}: FAIL - File missing: ${filePath}`, 'fail');
    }

    return exists;
  }

  checkFileVersion(filePath, expectedVersion, checkName) {
    this.metrics.summary.total_checks++;

    if (!fs.existsSync(filePath)) {
      this.metrics.summary.failed_checks++;
      this.metrics.checks[checkName] = {
        type: 'version_check',
        status: 'fail',
        path: filePath,
        message: 'File does not exist'
      };
      this.log(`${checkName}: FAIL - File not found: ${filePath}`, 'fail');
      return false;
    }

    try {
      const content = fs.readFileSync(filePath, 'utf8');
      const versionMatch = content.match(/v?(\d+\.\d+\.\d+)/);
      const actualVersion = versionMatch ? versionMatch[1] : 'not_found';
      const versionMatches = actualVersion === expectedVersion;

      this.metrics.checks[checkName] = {
        type: 'version_check',
        status: versionMatches ? 'pass' : 'fail',
        path: filePath,
        expected_version: expectedVersion,
        actual_version: actualVersion,
        message: versionMatches ? 'Version matches' : `Version mismatch (expected ${expectedVersion}, found ${actualVersion})`
      };

      if (versionMatches) {
        this.metrics.summary.passed_checks++;
        this.log(`${checkName}: PASS - Version ${actualVersion}`, 'pass');
      } else {
        this.metrics.summary.failed_checks++;
        this.log(`${checkName}: FAIL - Version mismatch (expected ${expectedVersion}, found ${actualVersion})`, 'fail');
      }

      return versionMatches;
    } catch (error) {
      this.metrics.summary.failed_checks++;
      this.metrics.checks[checkName] = {
        type: 'version_check',
        status: 'fail',
        path: filePath,
        message: `Error reading file: ${error.message}`
      };
      this.log(`${checkName}: FAIL - Error reading file: ${error.message}`, 'fail');
      return false;
    }
  }

  checkSkillHealth() {
    this.log('Checking skill health...', 'info');
    const skillsDir = path.join(AGENTS_DIR, 'skills');

    if (!fs.existsSync(skillsDir)) {
      this.log('Skills directory not found', 'fail');
      return;
    }

    const skillDirs = fs.readdirSync(skillsDir).filter(item => {
      const itemPath = path.join(skillsDir, item);
      return fs.statSync(itemPath).isDirectory();
    });

    this.metrics.summary.total_checks++;
    this.metrics.checks['skill_count'] = {
      type: 'skill_count',
      status: skillDirs.length >= 20 ? 'pass' : 'warn',
      count: skillDirs.length,
      expected: 20,
      message: `Found ${skillDirs.length} skills (expected at least 20)`
    };

    if (skillDirs.length >= 20) {
      this.metrics.summary.passed_checks++;
      this.log(`Skill count: PASS - Found ${skillDirs.length} skills`, 'pass');
    } else {
      this.metrics.summary.warnings++;
      this.log(`Skill count: WARN - Only ${skillDirs.length} skills found (expected at least 20)`, 'warn');
    }

    // Check individual skills
    let healthySkills = 0;
    skillDirs.forEach(skillDir => {
      const skillPath = path.join(skillsDir, skillDir);
      const skillMdPath = path.join(skillPath, 'SKILL.md');

      if (fs.existsSync(skillMdPath)) {
        try {
          const content = fs.readFileSync(skillMdPath, 'utf8');
          const hasName = content.includes('name:');
          const hasDescription = content.includes('description:');
          const hasVersion = content.includes('version:');
          const hasRole = content.includes('## Role');
          const hasTask = content.includes('## Task');

          const isHealthy = hasName && hasDescription && hasVersion && hasRole && hasTask;
          if (isHealthy) healthySkills++;

          this.metrics.checks[`skill_${skillDir}_health`] = {
            type: 'skill_health',
            status: isHealthy ? 'pass' : 'fail',
            skill: skillDir,
            has_name: hasName,
            has_description: hasDescription,
            has_version: hasVersion,
            has_role: hasRole,
            has_task: hasTask,
            message: isHealthy ? 'Skill is healthy' : 'Skill has missing sections'
          };
        } catch (error) {
          this.metrics.checks[`skill_${skillDir}_health`] = {
            type: 'skill_health',
            status: 'fail',
            skill: skillDir,
            message: `Error reading skill: ${error.message}`
          };
        }
      }
    });

    this.metrics.summary.total_checks++;
    if (healthySkills === skillDirs.length) {
      this.metrics.summary.passed_checks++;
      this.log(`Individual skills: PASS - All ${healthySkills} skills are healthy`, 'pass');
    } else {
      this.metrics.summary.failed_checks++;
      this.log(`Individual skills: FAIL - Only ${healthySkills}/${skillDirs.length} skills are healthy`, 'fail');
    }
  }

  checkWorkflowHealth() {
    this.log('Checking workflow health...', 'info');
    const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');

    if (!fs.existsSync(workflowsDir)) {
      this.log('Workflows directory not found', 'fail');
      return;
    }

    const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));

    this.metrics.summary.total_checks++;
    this.metrics.checks['workflow_count'] = {
      type: 'workflow_count',
      status: workflowFiles.length >= 20 ? 'pass' : 'warn',
      count: workflowFiles.length,
      expected: 20,
      message: `Found ${workflowFiles.length} workflows (expected at least 20)`
    };

    if (workflowFiles.length >= 20) {
      this.metrics.summary.passed_checks++;
      this.log(`Workflow count: PASS - Found ${workflowFiles.length} workflows`, 'pass');
    } else {
      this.metrics.summary.warnings++;
      this.log(`Workflow count: WARN - Only ${workflowFiles.length} workflows found (expected at least 20)`, 'warn');
    }
  }

  calculateOverallHealth() {
    const { total_checks, passed_checks, failed_checks, warnings } = this.metrics.summary;

    if (failed_checks === 0) {
      this.metrics.summary.overall_health = warnings === 0 ? 'excellent' : 'good';
    } else if (failed_checks <= total_checks * 0.1) {
      this.metrics.summary.overall_health = 'fair';
    } else {
      this.metrics.summary.overall_health = 'poor';
    }

    this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
  }

  generateReport() {
    const report = {
      ...this.metrics,
      duration: new Date() - this.startTime,
      environment: {
        node_version: process.version,
        platform: process.platform,
        agents_dir: AGENTS_DIR
      }
    };

    fs.writeFileSync(HEALTH_REPORT_PATH, JSON.stringify(report, null, 2));
    this.log(`Health report saved to: ${HEALTH_REPORT_PATH}`, 'info');

    return report;
  }

  async runFullHealthCheck() {
    this.log('Starting comprehensive health check...', 'info');
    this.log(`Base directory: ${BASE_DIR}`, 'info');

    // Core directory checks
    this.checkDirectoryExists(AGENTS_DIR, 'agents_directory');
    this.checkDirectoryExists(path.join(AGENTS_DIR, 'skills'), 'skills_directory');
    this.checkDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts_directory');
    this.checkDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules_directory');
    this.checkDirectoryExists(path.join(BASE_DIR, '.windsurf', 'workflows'), 'workflows_directory');

    // Core file checks
    this.checkFileExists(path.join(AGENTS_DIR, 'README.md'), 'readme_file');
    this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'VERSION'), 'skills_version_file');
    this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'skills.md'), 'skills_documentation');

    // Version consistency checks
    this.checkFileVersion(path.join(AGENTS_DIR, 'README.md'), '1.8.6', 'readme_version');
    this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'VERSION'), '1.8.6', 'skills_version_file_version');
    this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'skills.md'), '1.8.6', 'skills_documentation_version');
    this.checkFileVersion(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), '1.8.6', 'project_context_version');

    // Script availability checks
    this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash_version_script');
    this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash_audit_script');
    this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash_sync_script');
    this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell_version_script');
    this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell_audit_script');

    // Detailed health checks
    this.checkSkillHealth();
    this.checkWorkflowHealth();

    // Calculate overall health
    this.calculateOverallHealth();

    // Generate report
    const report = this.generateReport();

    // Summary
    this.log('=== Health Check Summary ===', 'info');
    this.log(`Total checks: ${this.metrics.summary.total_checks}`, 'info');
    this.log(`Passed: ${this.metrics.summary.passed_checks}`, 'pass');
    this.log(`Failed: ${this.metrics.summary.failed_checks}`, this.metrics.summary.failed_checks > 0 ? 'fail' : 'info');
    this.log(`Warnings: ${this.metrics.summary.warnings}`, 'warn');
    this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
    this.log(`Duration: ${new Date() - this.startTime}ms`, 'info');

    return report;
  }
}

// CLI interface
async function main() {
  const monitor = new HealthMonitor();

  try {
    const report = await monitor.runFullHealthCheck();
    process.exit(report.summary.failed_checks > 0 ? 1 : 0);
  } catch (error) {
    console.error('Health check failed:', error);
    process.exit(1);
  }
}

// Export for use in other modules
module.exports = { HealthMonitor };

// Run if called directly
if (require.main === module) {
  main();
}
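The grading in `calculateOverallHealth` reduces to a small pure function over the summary counters, which makes the thresholds easy to check in isolation. A sketch of the same rule:

```javascript
// Pure sketch of the grading rule from calculateOverallHealth:
// no failures -> 'excellent' ('good' if there are warnings),
// failures within 10% of total checks -> 'fair', otherwise 'poor'.
function gradeHealth({ total_checks, failed_checks, warnings }) {
  if (failed_checks === 0) {
    return warnings === 0 ? 'excellent' : 'good';
  }
  if (failed_checks <= total_checks * 0.1) {
    return 'fair';
  }
  return 'poor';
}

console.log(gradeHealth({ total_checks: 30, failed_checks: 0, warnings: 0 })); // excellent
console.log(gradeHealth({ total_checks: 30, failed_checks: 2, warnings: 1 })); // fair
console.log(gradeHealth({ total_checks: 30, failed_checks: 5, warnings: 0 })); // poor
```

Because the 10% cutoff scales with `total_checks`, adding more checks loosens the absolute number of failures tolerated before the rating drops to 'poor'.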
@@ -0,0 +1,494 @@
#!/usr/bin/env node
|
||||||
|
|
||||||
|
/**
|
||||||
|
* performance-monitor.js - Performance monitoring for .agents skills
|
||||||
|
* Part of LCBP3-DMS Phase 3 enhancements
|
||||||
|
*/
|
||||||
|
|
||||||
|
const fs = require('fs');
|
||||||
|
const path = require('path');
|
||||||
|
const { performance } = require('perf_hooks');
|
||||||
|
|
||||||
|
// Configuration
|
||||||
|
const BASE_DIR = path.resolve(__dirname, '../..');
|
||||||
|
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
|
||||||
|
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
|
||||||
|
const PERFORMANCE_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'performance.log');
|
||||||
|
const PERFORMANCE_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'performance-report.json');
|
||||||
|
|
||||||
|
// Ensure directories exist
|
||||||
|
[ path.dirname(PERFORMANCE_LOG_PATH), path.dirname(PERFORMANCE_REPORT_PATH) ].forEach(dir => {
|
||||||
|
if (!fs.existsSync(dir)) {
|
||||||
|
fs.mkdirSync(dir, { recursive: true });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// Performance monitoring class
|
||||||
|
class PerformanceMonitor {
|
||||||
|
constructor() {
|
||||||
|
this.startTime = performance.now();
|
||||||
|
this.metrics = {
|
||||||
|
timestamp: new Date().toISOString(),
|
||||||
|
duration: 0,
|
||||||
|
skill_metrics: {},
|
||||||
|
workflow_metrics: {},
|
||||||
|
system_metrics: {},
|
||||||
|
summary: {
|
||||||
|
total_skills_analyzed: 0,
|
||||||
|
total_workflows_analyzed: 0,
|
||||||
|
average_skill_size: 0,
|
||||||
|
average_workflow_size: 0,
|
||||||
|
performance_score: 0,
|
||||||
|
recommendations: []
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
log(message, level = 'info') {
|
||||||
|
const timestamp = new Date().toISOString();
|
||||||
|
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
|
||||||
|
|
||||||
|
// Console output with colors
|
||||||
|
const colors = {
|
||||||
|
info: '\x1b[36m', // Cyan
|
||||||
|
good: '\x1b[32m', // Green
|
||||||
|
warn: '\x1b[33m', // Yellow
|
||||||
|
poor: '\x1b[31m', // Red
|
||||||
|
reset: '\x1b[0m'
|
||||||
|
};
|
||||||
|
|
||||||
|
const color = colors[level] || colors.info;
|
||||||
|
console.log(`${color}${logEntry.trim()}${colors.reset}`);
|
||||||
|
|
||||||
|
// File logging
|
||||||
|
fs.appendFileSync(PERFORMANCE_LOG_PATH, logEntry);
|
||||||
|
}
|
||||||
|
|
||||||
|
analyzeSkillPerformance(skillPath, skillName) {
|
||||||
|
const skillMdPath = path.join(skillPath, 'SKILL.md');
|
||||||
|
|
||||||
|
if (!fs.existsSync(skillMdPath)) {
|
||||||
|
this.log(`Skipping ${skillName} - SKILL.md not found`, 'warn');
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
|
||||||
|
const startTime = performance.now();
|
||||||
|
|
||||||
|
try {
|
||||||
|
const stats = fs.statSync(skillMdPath);
|
||||||
|
const content = fs.readFileSync(skillMdPath, 'utf8');
|
||||||
|
|
||||||
|
// Basic metrics
|
||||||
|
const fileSizeKB = stats.size / 1024;
|
||||||
|
const lineCount = content.split('\n').length;
|
||||||
|
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
|
||||||
|
const charCount = content.length;
|
||||||
|
|
||||||
|
// Content complexity metrics
|
||||||
|
const sectionCount = (content.match(/^#+\s/gm) || []).length;
|
||||||
|
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
|
||||||
|
const listCount = (content.match(/^[-*+]\s/gm) || []).length;
|
||||||
|
|
||||||
|
// Front matter analysis
|
||||||
|
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
|
||||||
|
const frontMatterSize = frontMatterMatch ? frontMatterMatch[1].length : 0;
|
||||||
|
const hasFrontMatter = frontMatterMatch !== null;
|
||||||
|
|
||||||
|
// Readability metrics
|
||||||
|
const sentences = content.split(/[.!?]+/).filter(s => s.trim().length > 0);
|
||||||
|
const avgWordsPerSentence = sentences.length > 0 ? wordCount / sentences.length : 0;
|
||||||
|
const avgCharsPerWord = wordCount > 0 ? charCount / wordCount : 0;
|
||||||
|
|
||||||
|
// Performance score calculation
|
||||||
|
let performanceScore = 100;
|
||||||
|
|
||||||
|
// Size penalties
|
||||||
|
if (fileSizeKB > 50) performanceScore -= 10;
|
||||||
|
if (fileSizeKB > 100) performanceScore -= 20;
|
||||||
|
|
||||||
|
// Content quality bonuses
|
||||||
|
if (hasFrontMatter) performanceScore += 5;
|
||||||
|
if (sectionCount >= 3) performanceScore += 5;
|
||||||
|
if (codeBlockCount > 0) performanceScore += 5;
|
||||||
|
|
||||||
|
// Readability penalties
|
||||||
|
if (avgWordsPerSentence > 25) performanceScore -= 5;
|
||||||
|
if (avgWordsPerSentence > 35) performanceScore -= 10;
|
||||||
|
|
||||||
|
const analysisTime = performance.now() - startTime;
|
||||||
|
|
||||||
|
const skillMetrics = {
|
||||||
|
skill_name: skillName,
|
||||||
|
file_path: skillMdPath,
|
||||||
|
file_size_kb: Math.round(fileSizeKB * 100) / 100,
|
||||||
|
line_count: lineCount,
|
||||||
|
word_count: wordCount,
|
||||||
|
char_count: charCount,
|
||||||
|
section_count: sectionCount,
|
||||||
|
code_block_count: codeBlockCount,
|
||||||
|
list_count: listCount,
|
||||||
|
front_matter_size: frontMatterSize,
|
||||||
|
has_front_matter: hasFrontMatter,
|
||||||
|
avg_words_per_sentence: Math.round(avgWordsPerSentence * 100) / 100,
|
||||||
|
avg_chars_per_word: Math.round(avgCharsPerWord * 100) / 100,
|
||||||
|
performance_score: Math.max(0, Math.min(100, performanceScore)),
|
||||||
|
analysis_time_ms: Math.round(analysisTime * 100) / 100,
|
||||||
|
last_modified: stats.mtime.toISOString()
|
||||||
|
};
|
||||||
|
|
||||||
|
this.metrics.skill_metrics[skillName] = skillMetrics;
|
||||||
|
|
||||||
|
// Log performance assessment
|
||||||
|
if (performanceScore >= 80) {
|
||||||
|
this.log(`${skillName}: GOOD performance (score: ${performanceScore})`, 'good');
|
||||||
|
} else if (performanceScore >= 60) {
|
||||||
|
this.log(`${skillName}: OK performance (score: ${performanceScore})`, 'info');
|
||||||
|
} else {
|
||||||
|
this.log(`${skillName}: POOR performance (score: ${performanceScore})`, 'poor');
|
||||||
|
}
|
||||||
|
|
||||||
|
return skillMetrics;
|
||||||
|
|
||||||
|
} catch (error) {
|
||||||
|
this.log(`Error analyzing ${skillName}: ${error.message}`, 'warn');
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
analyzeWorkflowPerformance(workflowPath, workflowName) {
|
||||||
|
const startTime = performance.now();
|
||||||
|
|
||||||
|
if (!fs.existsSync(workflowPath)) {
|
||||||
|
this.log(`Skipping workflow ${workflowName} - file not found`, 'warn');
|
||||||
      return null;
    }

    try {
      const stats = fs.statSync(workflowPath);
      const content = fs.readFileSync(workflowPath, 'utf8');

      // Basic metrics
      const fileSizeKB = stats.size / 1024;
      const lineCount = content.split('\n').length;
      const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;

      // Workflow-specific metrics
      const stepCount = (content.match(/^\d+\./gm) || []).length;
      const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
      const skillReferences = (content.match(/@speckit-\w+/g) || []).length;

      // Performance score calculation
      let performanceScore = 100;

      // Size penalties
      if (fileSizeKB > 20) performanceScore -= 10;
      if (fileSizeKB > 50) performanceScore -= 20;

      // Content quality bonuses
      if (stepCount > 0) performanceScore += 10;
      if (codeBlockCount > 0) performanceScore += 5;
      if (skillReferences > 0) performanceScore += 5;

      const analysisTime = performance.now() - startTime;

      const workflowMetrics = {
        workflow_name: workflowName,
        file_path: workflowPath,
        file_size_kb: Math.round(fileSizeKB * 100) / 100,
        line_count: lineCount,
        word_count: wordCount,
        step_count: stepCount,
        code_block_count: codeBlockCount,
        skill_references: skillReferences,
        performance_score: Math.max(0, Math.min(100, performanceScore)),
        analysis_time_ms: Math.round(analysisTime * 100) / 100,
        last_modified: stats.mtime.toISOString()
      };

      this.metrics.workflow_metrics[workflowName] = workflowMetrics;

      // Log performance assessment
      if (performanceScore >= 80) {
        this.log(`${workflowName}: GOOD performance (score: ${performanceScore})`, 'good');
      } else if (performanceScore >= 60) {
        this.log(`${workflowName}: OK performance (score: ${performanceScore})`, 'info');
      } else {
        this.log(`${workflowName}: POOR performance (score: ${performanceScore})`, 'poor');
      }

      return workflowMetrics;

    } catch (error) {
      this.log(`Error analyzing workflow ${workflowName}: ${error.message}`, 'warn');
      return null;
    }
  }

  analyzeSystemMetrics() {
    this.log('Analyzing system metrics...', 'info');

    // Directory sizes
    const agentsSize = this.getDirectorySize(AGENTS_DIR);
    const skillsSize = this.getDirectorySize(SKILLS_DIR);
    const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
    const workflowsSize = fs.existsSync(workflowsDir) ? this.getDirectorySize(workflowsDir) : 0;

    // File counts
    const totalFiles = this.countFiles(AGENTS_DIR);
    const skillFiles = this.countFiles(SKILLS_DIR);
    const workflowFiles = fs.existsSync(workflowsDir) ? this.countFiles(workflowsDir) : 0;

    this.metrics.system_metrics = {
      agents_directory_size_kb: Math.round(agentsSize / 1024),
      skills_directory_size_kb: Math.round(skillsSize / 1024),
      workflows_directory_size_kb: Math.round(workflowsSize / 1024),
      total_files: totalFiles,
      skill_files: skillFiles,
      workflow_files: workflowFiles,
      analysis_timestamp: new Date().toISOString()
    };

    this.log(`System: ${totalFiles} files, ${Math.round(agentsSize / 1024)}KB total`, 'info');
  }

  getDirectorySize(dirPath) {
    let totalSize = 0;

    if (!fs.existsSync(dirPath)) {
      return 0;
    }

    const items = fs.readdirSync(dirPath);

    for (const item of items) {
      const itemPath = path.join(dirPath, item);
      const stats = fs.statSync(itemPath);

      if (stats.isDirectory()) {
        totalSize += this.getDirectorySize(itemPath);
      } else {
        totalSize += stats.size;
      }
    }

    return totalSize;
  }

  countFiles(dirPath) {
    let fileCount = 0;

    if (!fs.existsSync(dirPath)) {
      return 0;
    }

    const items = fs.readdirSync(dirPath);

    for (const item of items) {
      const itemPath = path.join(dirPath, item);
      const stats = fs.statSync(itemPath);

      if (stats.isDirectory()) {
        fileCount += this.countFiles(itemPath);
      } else {
        fileCount++;
      }
    }

    return fileCount;
  }

  generateRecommendations() {
    const recommendations = [];
    const { skill_metrics, workflow_metrics, system_metrics } = this.metrics;

    // Analyze skill performance
    const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
    const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 0;

    if (avgSkillScore < 70) {
      recommendations.push({
        type: 'performance',
        priority: 'high',
        message: 'Average skill performance is below optimal. Consider optimizing skill documentation.',
        details: `Average score: ${Math.round(avgSkillScore)}`
      });
    }

    // Check for oversized files
    const largeSkills = Object.values(skill_metrics).filter(m => m.file_size_kb > 50);
    if (largeSkills.length > 0) {
      recommendations.push({
        type: 'size',
        priority: 'medium',
        message: `${largeSkills.length} skills have large file sizes (>50KB). Consider breaking down complex skills.`,
        details: largeSkills.map(s => `${s.skill_name} (${s.file_size_kb}KB)`).join(', ')
      });
    }

    // Check for missing front matter
    const skillsWithoutFrontMatter = Object.values(skill_metrics).filter(m => !m.has_front_matter);
    if (skillsWithoutFrontMatter.length > 0) {
      recommendations.push({
        type: 'structure',
        priority: 'high',
        message: `${skillsWithoutFrontMatter.length} skills missing front matter. Add proper YAML front matter.`,
        details: skillsWithoutFrontMatter.map(s => s.skill_name).join(', ')
      });
    }

    // Analyze workflow performance
    const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
    const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 0;

    if (avgWorkflowScore < 70) {
      recommendations.push({
        type: 'performance',
        priority: 'medium',
        message: 'Average workflow performance could be improved. Add more detailed steps and examples.',
        details: `Average score: ${Math.round(avgWorkflowScore)}`
      });
    }

    // System recommendations
    if (system_metrics.agents_directory_size_kb > 1000) {
      recommendations.push({
        type: 'maintenance',
        priority: 'low',
        message: '.agents directory is growing large. Consider archiving old logs and reports.',
        details: `Current size: ${system_metrics.agents_directory_size_kb}KB`
      });
    }

    this.metrics.summary.recommendations = recommendations;

    // Log recommendations
    if (recommendations.length > 0) {
      this.log('Performance Recommendations:', 'info');
      recommendations.forEach((rec, index) => {
        const priority = rec.priority === 'high' ? 'HIGH' : rec.priority === 'medium' ? 'MED' : 'LOW';
        this.log(`  ${index + 1}. [${priority}] ${rec.message}`, 'warn');
      });
    } else {
      this.log('No performance issues detected - system is optimized!', 'good');
    }
  }

  calculateOverallPerformance() {
    const { skill_metrics, workflow_metrics } = this.metrics;

    const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
    const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);

    const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 100;
    const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 100;

    // Weight skills more heavily than workflows
    const overallScore = (avgSkillScore * 0.7) + (avgWorkflowScore * 0.3);

    this.metrics.summary.performance_score = Math.round(overallScore);
    this.metrics.summary.average_skill_size = skillScores.length > 0
      ? Math.round(Object.values(skill_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / skillScores.length * 100) / 100
      : 0;
    this.metrics.summary.average_workflow_size = workflowScores.length > 0
      ? Math.round(Object.values(workflow_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / workflowScores.length * 100) / 100
      : 0;
    this.metrics.summary.total_skills_analyzed = skillScores.length;
    this.metrics.summary.total_workflows_analyzed = workflowScores.length;
  }

  generateReport() {
    this.metrics.duration = performance.now() - this.startTime;

    const report = {
      ...this.metrics,
      generated_at: new Date().toISOString(),
      environment: {
        node_version: process.version,
        platform: process.platform,
        memory_usage: process.memoryUsage()
      }
    };

    fs.writeFileSync(PERFORMANCE_REPORT_PATH, JSON.stringify(report, null, 2));
    this.log(`Performance report saved to: ${PERFORMANCE_REPORT_PATH}`, 'info');

    return report;
  }

  async runPerformanceAnalysis() {
    this.log('Starting performance analysis...', 'info');
    this.log(`Base directory: ${BASE_DIR}`, 'info');

    // Analyze skills
    this.log('Analyzing skill performance...', 'info');
    if (fs.existsSync(SKILLS_DIR)) {
      const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
        const itemPath = path.join(SKILLS_DIR, item);
        return fs.statSync(itemPath).isDirectory();
      });

      for (const skillDir of skillDirs) {
        const skillPath = path.join(SKILLS_DIR, skillDir);
        this.analyzeSkillPerformance(skillPath, skillDir);
      }
    }

    // Analyze workflows
    this.log('Analyzing workflow performance...', 'info');
    const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
    if (fs.existsSync(workflowsDir)) {
      const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));

      for (const workflowFile of workflowFiles) {
        const workflowPath = path.join(workflowsDir, workflowFile);
        const workflowName = workflowFile.replace('.md', '');
        this.analyzeWorkflowPerformance(workflowPath, workflowName);
      }
    }

    // System metrics
    this.analyzeSystemMetrics();

    // Calculate overall performance
    this.calculateOverallPerformance();

    // Generate recommendations
    this.generateRecommendations();

    // Generate report
    const report = this.generateReport();

    // Summary
    this.log('=== Performance Analysis Summary ===', 'info');
    this.log(`Overall performance score: ${this.metrics.summary.performance_score}/100`, 'info');
    this.log(`Skills analyzed: ${this.metrics.summary.total_skills_analyzed}`, 'info');
    this.log(`Workflows analyzed: ${this.metrics.summary.total_workflows_analyzed}`, 'info');
    this.log(`Average skill size: ${this.metrics.summary.average_skill_size}KB`, 'info');
    this.log(`Average workflow size: ${this.metrics.summary.average_workflow_size}KB`, 'info');
    this.log(`Analysis duration: ${Math.round(this.metrics.duration)}ms`, 'info');
    this.log(`Recommendations: ${this.metrics.summary.recommendations.length}`, 'info');

    return report;
  }
}

// CLI interface
async function main() {
  const monitor = new PerformanceMonitor();

  try {
    const report = await monitor.runPerformanceAnalysis();
    process.exit(report.summary.performance_score < 60 ? 1 : 0);
  } catch (error) {
    console.error('Performance analysis failed:', error);
    process.exit(1);
  }
}

// Export for use in other modules
module.exports = { PerformanceMonitor };

// Run if called directly
if (require.main === module) {
  main();
}
@@ -0,0 +1,198 @@
# audit-skills.ps1 - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements

param(
    [string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot)))
)

# Map to ConsoleColor enum (Write-Host expects enum, not ANSI strings)
$Colors = @{
    Red     = 'Red'
    Green   = 'Green'
    Yellow  = 'Yellow'
    Blue    = 'Blue'
    NoColor = 'Gray'
}

$AgentsDir = Join-Path $BaseDir ".agents"
$SkillsDir = Join-Path $AgentsDir "skills"

Write-Host "=== Skills Health Audit ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host ""

# Function to check if skill has required files
function Test-SkillHealth {
    param(
        [string]$SkillDir
    )

    $skillName = Split-Path $SkillDir -Leaf
    $issues = 0

    # Check for SKILL.md
    $skillFile = Join-Path $SkillDir "SKILL.md"
    if (Test-Path $skillFile) {
        Write-Host "  OK: $skillName/SKILL.md" -ForegroundColor $Colors.Green
    } else {
        Write-Host "  MISSING: $skillName/SKILL.md" -ForegroundColor $Colors.Red
        $issues++
    }

    # Check for templates directory (optional)
    $templatesDir = Join-Path $SkillDir "templates"
    if (Test-Path $templatesDir) {
        $templateCount = (Get-ChildItem -Path $templatesDir -Filter "*.md" -File | Measure-Object).Count
        if ($templateCount -gt 0) {
            Write-Host "  OK: $skillName/templates ($templateCount files)" -ForegroundColor $Colors.Green
        } else {
            Write-Host "  EMPTY: $skillName/templates (no files)" -ForegroundColor $Colors.Yellow
        }
    }

    # Check SKILL.md content if exists
    if (Test-Path $skillFile) {
        $content = Get-Content $skillFile -Raw

        # Check for required front matter fields
        $requiredFields = @('name', 'description', 'version')
        foreach ($field in $requiredFields) {
            $pattern = "(?m)^${field}:"
            if ($content -match $pattern) {
                Write-Host "  FIELD: $field" -ForegroundColor $Colors.Green
            } else {
                Write-Host "  MISSING FIELD: $field" -ForegroundColor $Colors.Red
                $issues++
            }
        }

        # Check for LCBP3 context reference (speckit-* skills)
        if ($skillName -like 'speckit-*') {
            if ($content -match '_LCBP3-CONTEXT\.md') {
                Write-Host "  CONTEXT: LCBP3 appendix referenced" -ForegroundColor $Colors.Green
            } else {
                Write-Host "  MISSING: LCBP3 context reference" -ForegroundColor $Colors.Yellow
                $issues++
            }
        }
    }

    return $issues
}

# Function to get skill version from SKILL.md
function Get-SkillVersion {
    param(
        [string]$SkillFile
    )

    if (Test-Path $SkillFile) {
        try {
            $content = Get-Content $SkillFile -Raw
            if ($content -match "(?m)^version:\s*['""]?([0-9]+\.[0-9]+\.[0-9]+)['""]?") {
                return $matches[1].Trim()
            }
        } catch {
            return "error"
        }
    }
    return "no_file"
}

# Check skills directory
if (-not (Test-Path $SkillsDir)) {
    Write-Host "ERROR: Skills directory not found" -ForegroundColor $Colors.Red
    exit 1
}

Write-Host "Scanning skills directory: $SkillsDir"
Write-Host ""

# Get all skill directories
$skillDirs = Get-ChildItem -Path $SkillsDir -Directory | Sort-Object Name

Write-Host "Found $($skillDirs.Count) skill directories"
Write-Host ""

# Audit each skill
$totalIssues = 0
$skillSummary = @()

foreach ($skillDir in $skillDirs) {
    $skillName = $skillDir.Name
    Write-Host "Auditing: $skillName"
    Write-Host "------------------------"

    $issues = Test-SkillHealth -SkillDir $skillDir.FullName

    $skillVersion = Get-SkillVersion -SkillFile (Join-Path $skillDir.FullName "SKILL.md")
    $skillSummary += @{
        Name    = $skillName
        Issues  = $issues
        Version = $skillVersion
    }

    $totalIssues += $issues
    Write-Host ""
}

# Summary report
Write-Host "=== Skills Audit Summary ===" -ForegroundColor Cyan
Write-Host ""

Write-Host "Skill Status:"
Write-Host "-----------"
foreach ($summary in $skillSummary) {
    if ($summary.Issues -eq 0) {
        Write-Host "  HEALTHY: $($summary.Name) (v$($summary.Version))" -ForegroundColor $Colors.Green
    } else {
        Write-Host "  ISSUES: $($summary.Name) (v$($summary.Version)) - $($summary.Issues) issues" -ForegroundColor $Colors.Red
    }
}

Write-Host ""

# Check skills.md version consistency
$skillsVersionFile = Join-Path $SkillsDir "VERSION"
if (Test-Path $skillsVersionFile) {
    $content = Get-Content $skillsVersionFile -Raw
    # (?m) is required here: with -Raw the file is a single string, so a
    # bare ^ would only match the very start of the file, never a later line.
    if ($content -match "(?m)^version:\s*(.+)") {
        $globalVersion = $matches[1].Trim()
        Write-Host "Global skills version: v$globalVersion"
        Write-Host ""

        # Check for version mismatches
        Write-Host "Version Consistency Check:"
        Write-Host "------------------------"
        $versionMismatches = 0

        foreach ($summary in $skillSummary) {
            if ($summary.Version -ne "error" -and $summary.Version -ne "no_file" -and $summary.Version -ne $globalVersion) {
                Write-Host "  MISMATCH: $($summary.Name) is v$($summary.Version), global is v$globalVersion" -ForegroundColor $Colors.Yellow
                $versionMismatches++
            }
        }

        if ($versionMismatches -eq 0) {
            Write-Host "  All skills match global version" -ForegroundColor $Colors.Green
        }
    }
}

Write-Host ""

# Overall health
if ($totalIssues -eq 0) {
    Write-Host "=== SUCCESS: All skills healthy ===" -ForegroundColor $Colors.Green
    Write-Host "Total skills: $($skillDirs.Count)"
    exit 0
} else {
    Write-Host "=== ISSUES FOUND: $totalIssues total issues ===" -ForegroundColor $Colors.Red
    Write-Host ""
    Write-Host "Recommendations:"
    Write-Host "1. Fix missing SKILL.md files"
    Write-Host "2. Add required front matter fields"
    Write-Host "3. Ensure Role and Task sections exist"
    Write-Host "4. Align skill versions with global version"
    exit 1
}
@@ -0,0 +1,110 @@
# validate-versions.ps1 - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements

param(
    [string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot))),
    [string]$ExpectedVersion = "1.8.9"
)

# Map to ConsoleColor enum (Write-Host expects enum, not ANSI)
$Colors = @{
    Red     = 'Red'
    Green   = 'Green'
    Yellow  = 'Yellow'
    NoColor = 'Gray'
}

$AgentsDir = Join-Path $BaseDir ".agents"

Write-Host "=== .agents Version Validation ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host "Expected version: $ExpectedVersion"
Write-Host ""

# Function to extract version from file
function Get-VersionFromFile {
    param(
        [string]$FilePath,
        [string]$Pattern
    )

    if (Test-Path $FilePath) {
        try {
            $content = Get-Content $FilePath -Raw
            if ($content -match $Pattern) {
                return $matches[1]
            } else {
                return "NOT_FOUND"
            }
        } catch {
            return "ERROR"
        }
    } else {
        return "FILE_NOT_FOUND"
    }
}

# Files to check
$FilesToCheck = @{
    (Join-Path $AgentsDir "skills\VERSION")   = "version: ([0-9]+\.[0-9]+\.[0-9]+)"
    (Join-Path $AgentsDir "skills\skills.md") = "V([0-9]+\.[0-9]+\.[0-9]+)"
}

# Track issues
$Issues = 0

Write-Host "Checking version consistency..."
Write-Host ""

foreach ($file in $FilesToCheck.Keys) {
    $pattern = $FilesToCheck[$file]
    $relativePath = $file.Replace($BaseDir + "\", "")

    $version = Get-VersionFromFile -FilePath $file -Pattern $pattern

    # "ERROR" (unreadable file) is treated the same as a missing version.
    if ($version -eq "NOT_FOUND" -or $version -eq "FILE_NOT_FOUND" -or $version -eq "ERROR") {
        Write-Host "  ERROR: $relativePath - Version not found" -ForegroundColor $Colors.Red
        $Issues++
    } elseif ($version -ne $ExpectedVersion) {
        Write-Host "  ERROR: $relativePath - Found v$version, expected v$ExpectedVersion" -ForegroundColor $Colors.Red
        $Issues++
    } else {
        Write-Host "  OK: $relativePath - v$version" -ForegroundColor $Colors.Green
    }
}

Write-Host ""

# Check for version mismatches in skill files
Write-Host "Checking skill file versions..."
$SkillsVersionFile = Join-Path $AgentsDir "skills\VERSION"
if (Test-Path $SkillsVersionFile) {
    $skillsVersion = Get-VersionFromFile -FilePath $SkillsVersionFile -Pattern "version: ([0-9]+\.[0-9]+\.[0-9]+)"
    Write-Host "Skills version file: v$skillsVersion"
}

# Check workflow versions (in .windsurf\workflows)
$WorkflowsDir = Join-Path $BaseDir ".windsurf\workflows"
if (Test-Path $WorkflowsDir) {
    Write-Host "Checking workflow files..."
    $workflowCount = (Get-ChildItem -Path $WorkflowsDir -Filter "*.md" -File | Measure-Object).Count
    Write-Host "  OK: Found $workflowCount workflow files" -ForegroundColor $Colors.Green
} else {
    Write-Host "  WARNING: Workflows directory not found at $WorkflowsDir" -ForegroundColor $Colors.Yellow
}

Write-Host ""

# Summary
if ($Issues -eq 0) {
    Write-Host "=== SUCCESS: All versions consistent ===" -ForegroundColor $Colors.Green
    exit 0
} else {
    Write-Host "=== FAILED: $Issues version issues found ===" -ForegroundColor $Colors.Red
    Write-Host ""
    Write-Host "To fix version issues:"
    Write-Host "1. Update files to use v$ExpectedVersion"
    Write-Host "2. Ensure LCBP3 project version matches"
    Write-Host "3. Run this script again to verify"
    exit 1
}
@@ -0,0 +1,109 @@
# `.agents/skills/` — LCBP3 Agent Skill Pack

**Version:** 1.8.9 | **Last Updated:** 2026-04-22 | **Total Skills:** 20

Agent skills for AI-assisted development in **Windsurf IDE** (and compatible agents: Codex CLI, opencode, Amp, Antigravity, AGENTS.md-aware tools).

---

## 📂 Layout

```
.agents/skills/
├── VERSION                  # Single source of truth for skill-pack version
├── skills.md                # Overview + dependency matrix + health monitoring
├── _LCBP3-CONTEXT.md        # Shared LCBP3 context injected into every speckit-* skill
├── README.md                # (this file)
├── nestjs-best-practices/   # Backend rules (40 rules across 10 categories)
├── next-best-practices/     # Frontend rules (Next.js 15+)
└── speckit-*/               # 18 workflow skills (spec → plan → tasks → implement → …)
```

Each skill directory contains:

- `SKILL.md` — frontmatter (`name`, `description`, `version: 1.8.9`, `scope`, `depends-on`, `handoffs`) + instructions
- `templates/` _(optional)_ — artifact templates (spec/plan/tasks/checklist)
- `rules/` _(nestjs only)_ — individual rule files grouped by prefix (`arch-`, `security-`, `db-`, etc.)

---

## 🚀 How Windsurf Invokes These Skills

Windsurf exposes two entry points:

1. **Skill tool** — Windsurf discovers skills by scanning `.agents/skills/*/SKILL.md` frontmatter. Skills marked `user-invocable: false` are used silently by Cascade.
2. **Slash commands** — `.windsurf/workflows/*.md` wraps each skill as a slash command (e.g. `/04-speckit.plan`). The workflow file is short; the heavy lifting is delegated to the skill via the `skill` tool.

Both paths end up executing the same `SKILL.md` instructions.

---

## 🧭 Typical Flow

```
/01-speckit.constitution → AGENTS.md / product vision
/02-speckit.specify      → specs/feat-XXX/spec.md
/03-speckit.clarify      → updates spec.md (up to 5 targeted questions)
/04-speckit.plan         → specs/feat-XXX/plan.md + data-model.md + contracts/
/05-speckit.tasks        → specs/feat-XXX/tasks.md
/06-speckit.analyze      → cross-artifact consistency report (read-only)
/07-speckit.implement    → executes tasks with Ironclad Protocols (Blast Radius + Strangler + TDD)
/08-speckit.checker      → pnpm lint / typecheck / markdown-lint
/09-speckit.tester       → pnpm test + coverage gates (Backend 70%+, Business Logic 80%+)
/10-speckit.reviewer     → code review with Tier 1/2/3 classification
/11-speckit.validate     → UAT / acceptance-criteria.md
```

Use `/00-speckit.all` to run specify → clarify → plan → tasks → analyze in one go.

---

## 🛠️ Helper Scripts

From repo root:

| Script | Purpose |
| --- | --- |
| `./.agents/scripts/bash/check-prerequisites.sh --json` | Emit `FEATURE_DIR` + `AVAILABLE_DOCS` for a feature branch |
| `./.agents/scripts/bash/setup-plan.sh --json` | Emit `FEATURE_SPEC`, `IMPL_PLAN`, `SPECS_DIR`, `BRANCH` |
| `./.agents/scripts/bash/update-agent-context.sh windsurf` | Append tech entries to `AGENTS.md` |
| `./.agents/scripts/bash/audit-skills.sh` | Validate all `SKILL.md` frontmatter + presence |
| `./.agents/scripts/bash/validate-versions.sh` | Version consistency check |
| `./.agents/scripts/bash/sync-workflows.sh` | Verify every skill has a `.windsurf/workflows/*.md` wrapper |

All scripts mirror to `.agents/scripts/powershell/*.ps1` for Windows.

---

## ⚠️ Tier 1 Non-Negotiables (auto-enforced)

- ADR-019 — `publicId` exposed directly; no `parseInt` / `Number` / `+` on UUID; no `id ?? ''` fallback
- ADR-009 — edit SQL schema directly, no TypeORM migrations
- ADR-016 — JWT + CASL on every mutation; `Idempotency-Key` required; ClamAV two-phase upload
- ADR-018 — AI via DMS API only (Ollama on Admin Desktop; no direct DB/storage)
- ADR-007 — layered error classification (Validation / Business / System)
- Zero `any`, zero `console.log` (use `Logger`)
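
As a quick illustration of the ADR-019 rule, a minimal sketch (the helper is hypothetical, not project code) of treating `publicId` as an opaque string and validating its format instead of coercing it:

```typescript
// Hypothetical helper, not part of the LCBP3 codebase: UUIDs are
// opaque strings, so validate the format rather than convert.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

export function assertPublicId(value: string): string {
  // parseInt/Number/+ would silently truncate a UUID to its leading
  // digits (or yield NaN), which is exactly what ADR-019 forbids.
  if (!UUID_RE.test(value)) {
    throw new Error(`Invalid publicId: ${value}`);
  }
  return value;
}
```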

See [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md) for the complete list.

---

## 🤝 Extending

To add a new skill:

1. Create `NAME/SKILL.md` with frontmatter: `name`, `description`, `version: 1.8.9`, `scope`, `depends-on`.
2. Append an LCBP3 context reference pointing to `_LCBP3-CONTEXT.md`.
3. Wrap with `.windsurf/workflows/NAME.md` so it becomes a slash command.
4. Update [`skills.md`](./skills.md) dependency matrix.
5. Run `./.agents/scripts/bash/audit-skills.sh` → must pass.

---

## 📚 References

- **Canonical rules:** `AGENTS.md` (repo root)
- **Product vision:** `specs/00-Overview/00-03-product-vision.md`
- **ADRs:** `specs/06-Decision-Records/`
- **Engineering guidelines:** `specs/05-Engineering-Guidelines/`
- **Contributing:** `CONTRIBUTING.md`
@@ -1,10 +1,25 @@
# Speckit Skills Version

version: 1.8.9
release_date: 2026-04-22

## Changelog

### 1.8.9 (2026-04-22)
- Full LCBP3-native rebuild of `.agents/skills/`
- Fixed ADR-019 drift (removed `@Expose({ name: 'id' })` and `id ?? ''` fallback patterns)
- Replaced all dead references (`GEMINI.md` → `AGENTS.md`, v1.7.0 → v1.8.0 schema, `.specify/memory/` → `AGENTS.md`)
- Added real helper scripts under `.agents/scripts/bash/` and `.agents/scripts/powershell/`
- Added ADR-007/008/020/021 coverage
- New rules: workflow-engine, file-two-phase-upload, ai-boundary, i18n, file-upload, workflow-banner
- Standardized frontmatter across all 20 skills (`version: 1.8.9`)

### 1.8.6 (2026-04-14)
- Version alignment with LCBP3-DMS v1.8.6
- Complete skill implementations for all 20 skills
- Enhanced security and audit capabilities
- Production-ready deployment status

### 1.1.0 (2026-01-24)
- New QA skills: tester, reviewer, checker
- tester: Execute tests, measure coverage, report results
@@ -0,0 +1,91 @@
# 🧭 LCBP3-DMS Context Appendix (Shared)

> This file is included/referenced by every Speckit skill as the authoritative project context.
> Skills **must** load it (or the files it links to) before generating any artifact.

**Project:** NAP-DMS (LCBP3) — Laem Chabang Port Phase 3 Document Management System

**Stack:** NestJS 11 + Next.js 16 + TypeScript + MariaDB 11.8 + Redis + BullMQ + Elasticsearch + Ollama (on-prem AI)

**Version:** 1.8.9 (2026-04-18)

---

## 📌 Canonical Rule Sources (read in this order)

1. **`AGENTS.md`** (repo root) — primary rule file for AI agents; supersedes legacy `GEMINI.md`.
2. **`specs/06-Decision-Records/`** — architectural decisions (22 ADRs); ADR priority > Engineering Guidelines.
3. **`specs/05-Engineering-Guidelines/`** — backend/frontend/testing/i18n/git patterns.
4. **`specs/00-Overview/00-02-glossary.md`** — domain terminology (Correspondence / RFA / Transmittal / Circulation).
5. **`specs/00-Overview/00-03-product-vision.md`** — project constitution (Vision, Strategic Pillars, Guardrails).
6. **`CONTRIBUTING.md`** — spec writing standards, PR template, review levels.
7. **`README.md`** — technology stack + getting started.

---
## 🔴 Tier 1 Non-Negotiables

- **ADR-019 UUID:** `publicId: string` exposed directly — **no** `@Expose({ name: 'id' })` rename; **no** `parseInt`/`Number`/`+` on UUID; **no** `id ?? ''` fallback in frontend.
- **ADR-009:** No TypeORM migrations — edit `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` or add a `deltas/*.sql` file.
- **ADR-016 Security:** JWT + CASL 4-Level RBAC; `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on every mutation controller; `ThrottlerGuard` on auth; bcrypt 12 rounds; `Idempotency-Key` required on POST/PUT/PATCH.
- **ADR-002 Document Numbering:** Redis Redlock + TypeORM `@VersionColumn` (double-lock). Never rely on an application-side counter alone.
- **ADR-008 Notifications:** BullMQ queue — never send email/notifications inline in a request thread.
- **ADR-018 AI Boundary:** Ollama on Admin Desktop only; AI → DMS API → DB (never direct DB/storage access). Human-in-the-loop validation required.
- **ADR-007 Error Handling:** Layered (Validation / Business / System); `BusinessException` hierarchy; user-friendly `userMessage` + `recoveryAction`; technical stack traces in logs only.
- **TypeScript Strict:** Zero `any`, zero `console.log` (use NestJS `Logger`).
- **i18n:** No hardcoded Thai/English strings in components — use i18n keys (see `05-08-i18n-guidelines.md`).
- **File Upload:** Two-phase (Temp → ClamAV → Permanent), whitelist `PDF/DWG/DOCX/XLSX/ZIP`, max 50MB, `StorageService` only.
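The ADR-002 double-lock rule above can be sketched in plain TypeScript. This is an in-memory stand-in with hypothetical names — the real implementation pairs Redis Redlock (lock 1) with TypeORM's `@VersionColumn` (lock 2):

```typescript
// Sketch only — in-memory stand-in for ADR-002's double lock.
// Lock 1 (distributed Redlock) serializes writers across app instances;
// lock 2 is the optimistic version check shown here, which catches any
// writer that slipped past the lock. All names are hypothetical.
type CounterRow = { value: number; version: number };

class NumberingSketch {
  private row: CounterRow = { value: 0, version: 0 };

  issueNext(prefix: string): string {
    for (let attempt = 0; attempt < 3; attempt++) {
      const snapshot = { ...this.row }; // read current counter + version
      const next = snapshot.value + 1;
      // Optimistic check: the write succeeds only if the version is
      // unchanged, mirroring TypeORM's @VersionColumn conflict semantics.
      if (this.row.version === snapshot.version) {
        this.row = { value: next, version: snapshot.version + 1 };
        return `${prefix}-${String(next).padStart(4, "0")}`;
      }
      // version conflict → retry under the distributed lock
    }
    throw new Error("Numbering conflict: retries exhausted");
  }
}

const svc = new NumberingSketch();
const first = svc.issueNext("RFA"); // "RFA-0001"
const second = svc.issueNext("RFA"); // "RFA-0002"
```

The point of the pairing: neither lock alone is sufficient — the distributed lock can expire mid-write, and the version column alone would thrash under contention.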
---

## 🏷️ Domain Glossary (reject generic terms)

| ✅ Use | ❌ Don't Use |
| --- | --- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Transmittal | Delivery Note, Cover Letter |
| Circulation | Distribution, Routing |
| Shop Drawing | Construction Drawing |
| Contract Drawing | Design Drawing, Blueprint |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
---

## 📁 Key Files for Generating / Validating Artifacts

| When you need... | Read |
| --- | --- |
| A new feature spec | `.agents/skills/speckit-specify/templates/spec-template.md` + `specs/01-Requirements/01-06-edge-cases-and-rules.md` |
| A plan | `.agents/skills/speckit-plan/templates/plan-template.md` + relevant ADRs |
| Task breakdown | `.agents/skills/speckit-tasks/templates/tasks-template.md` + existing patterns in `specs/08-Tasks/` |
| Acceptance criteria / UAT | `specs/01-Requirements/01-05-acceptance-criteria.md` |
| Schema / table definition | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` + `03-01-data-dictionary.md` |
| RBAC / permissions | `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` + `01-02-01-rbac-matrix.md` |
| Release / hotfix | `specs/04-Infrastructure-OPS/04-08-release-management-policy.md` |
---

## 🛠️ Helper Scripts (real paths in this repo)

- `./.agents/scripts/bash/check-prerequisites.sh` / `powershell/*.ps1`
- `./.agents/scripts/bash/setup-plan.sh`
- `./.agents/scripts/bash/update-agent-context.sh windsurf`
- `./.agents/scripts/bash/audit-skills.sh`
- `./.agents/scripts/bash/validate-versions.sh`
- `./.agents/scripts/bash/sync-workflows.sh`
---

## ✅ Commit Checklist (applied automatically by speckit-implement)

- [ ] UUID pattern verified (no `parseInt` / `Number` / `+` on UUID, no `id ?? ''` fallback)
- [ ] No `any`, no `console.log` in committed code
- [ ] Business comments in Thai, code identifiers in English
- [ ] Schema changes made directly in SQL (no migration files)
- [ ] Test coverage meets targets (Backend 70%+, Business Logic 80%+)
- [ ] Relevant ADRs referenced (007/008/009/016/018/019/020/021)
- [ ] Domain glossary terms used correctly
- [ ] Error handling: `Logger` + `HttpException` / `BusinessException`
- [ ] i18n keys used (no hardcoded text)
- [ ] Cache invalidated when data is mutated
- [ ] OWASP Top 10 review passed
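Several of these checklist items can be pre-checked mechanically. A rough sketch of such a sweep (hypothetical helper, not the project's actual CI gate):

```typescript
// Hypothetical lint sweep for two checklist items: no console.log, and no
// numeric coercion on UUID-ish identifiers (ADR-019). Regexes are illustrative.
const FORBIDDEN: Array<{ rule: string; pattern: RegExp }> = [
  { rule: "no-console-log", pattern: /console\.log/ },
  { rule: "no-numeric-uuid", pattern: /(parseInt|Number)\(\s*\w*([Uu]uid|[Pp]ublicId)/ },
];

// Returns the names of every rule the given source text violates.
function sweep(source: string): string[] {
  return FORBIDDEN.filter(({ pattern }) => pattern.test(source)).map(
    ({ rule }) => rule
  );
}

const violations = sweep(`
  const id = parseInt(projectUuid);
  console.log(id);
`);
// violations → ["no-console-log", "no-numeric-uuid"]
```

A real gate would walk the repo and run per-file, but the matching logic is this simple.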
File diff suppressed because it is too large
@@ -1,10 +1,12 @@
---
name: nestjs-best-practices
description: NestJS best practices and architecture patterns for building production-ready LCBP3-DMS backend code. Enforces ADR-009 (no TypeORM migrations), ADR-019 (hybrid UUID), ADR-016 (security), ADR-007 (error handling), ADR-008 (BullMQ), ADR-001/002 (workflow + numbering), ADR-018/020 (AI boundary), and ADR-021 (workflow context).
version: 1.8.9
scope: backend
user-invocable: false
license: MIT
metadata:
  upstream: 'Kadajett/nestjs-best-practices v1.1.0 (forked + LCBP3-aligned)'
---

# NestJS Best Practices
@@ -110,6 +112,13 @@ Reference these guidelines when:
- `devops-use-logging` - Structured logging
- `devops-graceful-shutdown` - Zero-downtime deployments

### 11. LCBP3-Specific (CRITICAL — Project Overrides)

- `db-no-typeorm-migrations` — **CRITICAL** ADR-009: edit SQL directly
- `lcbp3-workflow-engine` — **CRITICAL** ADR-001/002/021: DSL state machine + double-lock numbering + workflow context
- `security-file-two-phase-upload` — **CRITICAL** ADR-016: Upload → Temp → ClamAV → Commit
- `lcbp3-ai-boundary` — **CRITICAL** ADR-018/020: Ollama on-prem only, human-in-the-loop

## NAP-DMS Project-Specific Rules (MUST FOLLOW)

These rules override general NestJS best practices for the NAP-DMS project:
@@ -120,21 +129,62 @@ These rules override general NestJS best practices for the NAP-DMS project:
- Edit the schema directly at: `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
- Use an n8n workflow for data migration if needed

### ADR-019: Hybrid Identifier Strategy (CRITICAL — March 2026 Pattern)

> **Updated pattern:** `UuidBaseEntity` exposes `publicId` **directly**. Do not use `@Expose({ name: 'id' })` — the API returns `publicId` as the field name as-is.

```typescript
// ✅ CORRECT — use UuidBaseEntity
@Entity()
export class Project extends UuidBaseEntity {
  // publicId (string UUIDv7) + id (INT, @Exclude) inherited from UuidBaseEntity
  // API response → { publicId: "019505a1-7c3e-7000-8000-abc123..." }

  @Column()
  projectCode: string;

  @Column()
  projectName: string;
}
```

```typescript
// ❌ WRONG — legacy pattern, do not use
@Entity()
export class OldProject {
  @PrimaryGeneratedColumn()
  @Exclude()
  id: number;

  @Column({ type: 'uuid' })
  @Expose({ name: 'id' }) // ❌ do not rename publicId to 'id'
  publicId: string;
}
```

**DTO Input (accepting a UUID from the frontend):**

```typescript
export class CreateContractDto {
  @IsUUID('7')
  projectUuid: string; // Accept UUID string from the client
}

// Controller resolves UUID → INT internally
@Post()
async create(@Body() dto: CreateContractDto) {
  const projectId = await this.projectService.resolveInternalId(dto.projectUuid);
  return this.contractService.create({ ...dto, projectId });
}
```

**Strictly forbidden (CI Blocker):**

- ❌ `parseInt(projectPublicId)` — "019505…" → 19 (silently wrong)
- ❌ `Number(publicId)` / `+publicId` — NaN
- ❌ `@Expose({ name: 'id' })` on `publicId` (legacy pattern)
- ❌ Exposing the INT `id` in API responses (must always be `@Exclude()`)

### Two-Phase File Upload

```typescript
@@ -0,0 +1,24 @@
{
  "version": "1.8.9",
  "organization": "**NAP-DMS / LCBP3** — Laem Chabang Port Phase 3 Document Management System",
  "date": "2026-04-22",
  "abstract": "Comprehensive NestJS best-practices guide compiled for the LCBP3-DMS backend. Contains 40+ rules across 11 categories (10 general + 1 project-specific), prioritized by impact. Forked from Kadajett/nestjs-best-practices (v1.1.0) and aligned to LCBP3 ADRs: ADR-001 (workflow engine), ADR-002 (document numbering), ADR-007 (error handling), ADR-008 (notifications/BullMQ), ADR-009 (no TypeORM migrations), ADR-016 (security), ADR-018/020 (AI boundary), ADR-019 (hybrid UUID identifier — March 2026 pattern), and ADR-021 (workflow context).\n\nThis document is the single, consolidated reference used by Cascade and other AI coding agents when writing, reviewing, or refactoring backend code in this repository. All LCBP3-specific overrides live in section 11.",
  "references": [
    "[AGENTS.md (root)](../../../AGENTS.md) — canonical AI agent rules",
    "[CONTRIBUTING.md](../../../CONTRIBUTING.md) — spec authoring + PR process",
    "[ADR-001 Unified Workflow Engine](../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)",
    "[ADR-002 Document Numbering Strategy](../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)",
    "[ADR-007 Error Handling Strategy](../../../specs/06-Decision-Records/ADR-007-error-handling-strategy.md)",
    "[ADR-008 Email/Notification Strategy](../../../specs/06-Decision-Records/ADR-008-email-notification-strategy.md)",
    "[ADR-009 Database Migration Strategy](../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)",
    "[ADR-016 Security & Authentication](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)",
    "[ADR-018 AI Boundary](../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)",
    "[ADR-019 Hybrid Identifier Strategy](../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)",
    "[ADR-020 AI Intelligence Integration](../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)",
    "[ADR-021 Workflow Context](../../../specs/06-Decision-Records/ADR-021-workflow-context.md)",
    "[Backend Engineering Guidelines](../../../specs/05-Engineering-Guidelines/05-02-backend-guidelines.md)",
    "[Schema — v1.8.0 Tables](../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)",
    "[Data Dictionary](../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)",
    "Upstream: [Kadajett/nestjs-best-practices](https://github.com/Kadajett/nestjs-best-practices) v1.1.0"
  ]
}
@@ -5,20 +5,22 @@ impactDescription: Use INT PK internally + UUID for public API per project ADR-0
tags: database, uuid, identifier, adr-019, api-design, typeorm
---

## Hybrid Identifier Strategy (ADR-019) — March 2026 Pattern

**This project follows ADR-019: INT Primary Key (internal) + UUIDv7 (public API)**

Unlike standard practices that use UUID as the primary key, this project uses a **hybrid approach** optimized for MariaDB performance and API consistency.

> **Updated pattern (March 2026):** Entities extend `UuidBaseEntity`. The `publicId` column is exposed **directly** in API responses — do not use `@Expose({ name: 'id' })` to rename it.

### The Strategy

| Layer | Field | Type | Usage |
| --- | --- | --- | --- |
| **Database PK** | `id` | `INT AUTO_INCREMENT` | Internal foreign keys only (marked `@Exclude()`) |
| **Public API** | `publicId` | `MariaDB UUID` (native, BINARY(16)) | External references, URLs — exposed as-is |
| **DTO Input** | `xxxUuid` | `string` (UUIDv7) | Accept UUID in create/update DTOs |
| **DTO Output** | `publicId` | `string` (UUIDv7) | API returns the `publicId` field directly (no rename) |

### Why Hybrid IDs?
@@ -27,31 +29,51 @@ Unlike standard practices that use UUID as the primary key, this project uses a
- **Compatibility**: UUID works well with distributed systems and external integrations
- **MariaDB Native**: Uses MariaDB's native UUID type (stored as BINARY(16), auto-converts to string)

### Entity Definition (Current Pattern)

```typescript
import { Entity, Column } from 'typeorm';
import { UuidBaseEntity } from '@/common/entities/uuid-base.entity';

@Entity('contracts')
export class Contract extends UuidBaseEntity {
  // publicId (string UUIDv7) + id (INT, @Exclude) inherited from UuidBaseEntity
  // API response → { publicId: "019505a1-7c3e-7000-8000-abc123...", contractCode: ..., ... }

  @Column()
  contractCode: string;

  @Column()
  contractName: string;

  @Column({ name: 'project_id' })
  projectId: number; // INT FK — internal only, never exposed in the API
}
```

**`UuidBaseEntity` (shared base):**

```typescript
import { PrimaryGeneratedColumn, Column, CreateDateColumn, UpdateDateColumn } from 'typeorm';
import { Exclude } from 'class-transformer';

export abstract class UuidBaseEntity {
  @PrimaryGeneratedColumn()
  @Exclude() // ❗ CRITICAL: the INT id must never leak to the API
  id: number;

  @Column({ type: 'uuid', unique: true, generated: 'uuid' })
  publicId: string; // UUIDv7, exposed as-is

  @CreateDateColumn()
  createdAt: Date;

  @UpdateDateColumn()
  updatedAt: Date;
}
```

### DTO Pattern (Accept UUID, Resolve to INT Internally)

```typescript
// dto/create-contract.dto.ts
@@ -59,7 +81,7 @@ import { IsUUID, IsNotEmpty } from 'class-validator';
export class CreateContractDto {
  @IsNotEmpty()
  @IsUUID('7') // UUIDv7 (MariaDB native)
  projectUuid: string; // Accept UUID from client

  @IsNotEmpty()
@@ -69,48 +91,38 @@ export class CreateContractDto {
  contractName: string;
}

// ❌ No Response DTO with an @Expose rename is needed.
// class-transformer (via the global TransformInterceptor) serializes publicId directly.
```

### Service/Controller Pattern

```typescript
@Controller('contracts')
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
export class ContractsController {
  constructor(
    private contractsService: ContractsService,
    private uuidResolver: UuidResolver
  ) {}

  @Post()
  async create(@Body() dto: CreateContractDto) {
    // Resolve UUID → INT PK for the FK relationship
    const projectId = await this.uuidResolver.resolveProject(dto.projectUuid);

    const contract = await this.contractsService.create({
      ...dto,
      projectId,
    });

    // Response: TransformInterceptor + @Exclude on id → publicId exposed directly
    return contract;
  }

  @Get(':publicId')
  async findOne(@Param('publicId', ParseUuidPipe) publicId: string) {
    return this.contractsService.findOneByPublicId(publicId);
  }
}
```
@@ -124,21 +136,21 @@ export class UuidResolver {
    @InjectRepository(Project)
    private projectRepo: Repository<Project>,
    @InjectRepository(Contract)
    private contractRepo: Repository<Contract>
  ) {}

  async resolveProject(publicId: string): Promise<number> {
    const project = await this.projectRepo.findOne({
      where: { publicId },
      select: ['id'], // Only fetch the INT PK for the FK
    });
    if (!project) throw new NotFoundException('Project not found');
    return project.id;
  }

  async resolveContract(publicId: string): Promise<number> {
    const contract = await this.contractRepo.findOne({
      where: { publicId },
      select: ['id'],
    });
    if (!contract) throw new NotFoundException('Contract not found');
@@ -147,20 +159,20 @@ export class UuidResolver {
|
  }
}
```

### TransformInterceptor (Required — register ONCE)

```typescript
// Register via APP_INTERCEPTOR in CommonModule — do not register it again in main.ts
@Injectable()
export class TransformInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
    return next.handle().pipe(
      map((data) => instanceToPlain(data)) // Applies @Exclude / @Expose
    );
  }
}

// common.module.ts
@Module({
  providers: [
    {
@@ -169,35 +181,37 @@ export class TransformInterceptor implements NestInterceptor {
    },
  ],
})
export class CommonModule {}
```

> **Warning:** Do not also call `app.useGlobalInterceptors(new TransformInterceptor())` in `main.ts` — that would double-wrap the response as `{ data: { data: ... } }`.
### Critical: NEVER ParseInt on UUID

```typescript
// ❌ WRONG - parseInt on UUID gives a garbage value
const id = parseInt(projectPublicId); // "0195a1b2-..." → 195 (wrong!)

// ❌ WRONG - Number() on UUID
const id = Number(projectPublicId); // NaN

// ❌ WRONG - Unary plus on UUID
const id = +projectPublicId; // NaN

// ✅ CORRECT - Resolve via database lookup
const projectId = await uuidResolver.resolveProject(projectPublicId);

// ✅ CORRECT - Use TypeORM find with the publicId column
const project = await projectRepo.findOne({ where: { publicId: projectPublicId } });
const id = project.id; // Get the INT PK from the entity
```
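A minimal runtime guard for this rule can be sketched in plain TypeScript (a hypothetical helper, not part of the project API — the CI grep is the real enforcement):

```typescript
// Hypothetical guard: fail fast before any numeric coercion can touch a UUID.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertUuid(value: string): string {
  if (!UUID_RE.test(value)) {
    throw new Error(`Expected a UUID, got: ${value}`);
  }
  return value;
}

// parseInt silently truncates a UUID to its leading digits — the failure mode
// this rule exists to prevent:
const garbage = parseInt("0195a1b2-7c3e-7000-8000-abc123456789", 10); // 195
const checked = assertUuid("0195a1b2-7c3e-7000-8000-abc123456789"); // passes through
```

Such a guard throws loudly at the boundary instead of letting a truncated integer reach a query.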
### Query with publicId (No Resolution Needed)

```typescript
// Direct UUID lookup in TypeORM
const project = await this.projectRepo.findOne({
  where: { publicId: projectPublicId },
});

// Relations use INT FK internally
@@ -0,0 +1,100 @@
---
title: No TypeORM Migrations (ADR-009)
impact: CRITICAL
impactDescription: Edit SQL schema files directly; n8n handles data migration. Do not generate TypeORM migration files.
tags: database, schema, migration, adr-009, sql, n8n
---

## No TypeORM Migrations (ADR-009)

**This project does NOT use TypeORM migration files.**

All schema changes must be made **directly** in the canonical SQL file:

- `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`

Delta scripts (for incremental rollout to existing environments) go under:

- `specs/03-Data-and-Storage/deltas/YYYY-MM-DD-descriptive-name.sql`

Data migration (e.g., backfilling a new column) is handled by **n8n workflows**, not TypeORM's `QueryRunner`.

---

## Why No Migrations?

1. **Single source of truth** — The full SQL schema is always readable as one file. No need to replay a migration chain to understand current state.
2. **Review friendly** — Schema diff = git diff on the SQL file. Reviewers see the complete picture.
3. **Ops alignment** — DBAs and operators work in SQL, not TypeScript.
4. **n8n for data** — Business-meaningful data transforms live in n8n where they can be versioned, retried, and orchestrated with monitoring.

---

## ✅ Workflow for a Schema Change

1. **Update Data Dictionary** first:
   - `specs/03-Data-and-Storage/03-01-data-dictionary.md` — add field meaning + business rules.
2. **Update the canonical schema**:
   - Edit `lcbp3-v1.8.0-schema-02-tables.sql` — add/alter column, constraint, index.
3. **Add a delta script** (if deploying to an existing environment):
   - `specs/03-Data-and-Storage/deltas/2026-04-22-add-rfa-revision-column.sql`

   ```sql
   -- Delta: Add revision column to rfa table
   ALTER TABLE rfa
     ADD COLUMN revision INT NOT NULL DEFAULT 1 AFTER status;

   CREATE INDEX idx_rfa_revision ON rfa(revision);
   ```

4. **Update the Entity** (`backend/src/.../entities/rfa.entity.ts`):

   ```typescript
   @Column({ type: 'int', default: 1 })
   revision: number;
   ```

5. **If a data backfill is needed** → create an n8n workflow, not a TypeScript migration.

---

## ❌ Forbidden

```bash
# ❌ DO NOT generate migrations
pnpm typeorm migration:generate ./src/migrations/AddRevision

# ❌ DO NOT run migrations
pnpm typeorm migration:run
```

```typescript
// ❌ DO NOT write migration classes
export class AddRevision1730000000000 implements MigrationInterface {
  async up(queryRunner: QueryRunner): Promise<void> { /* ... */ }
  async down(queryRunner: QueryRunner): Promise<void> { /* ... */ }
}
```

---

## ✅ TypeORM Config (runtime only)

```typescript
// ormconfig.ts
export default {
  type: 'mariadb',
  // ...
  synchronize: false, // ❗ NEVER true (would auto-sync entity ↔ schema)
  migrationsRun: false, // ❗ NEVER true
  // ❌ Do NOT specify `migrations:` entries
};
```

`synchronize: false` is mandatory because the canonical SQL file is authoritative — TypeORM should never mutate the schema.

---

## Reference

- [ADR-009 Database Migration Strategy](../../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
- [Schema Tables](../../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)
@@ -0,0 +1,157 @@
---
title: AI Integration Boundary (ADR-018 / ADR-020)
impact: CRITICAL
impactDescription: AI runs on Admin Desktop only; AI → DMS API → DB (never direct); human-in-the-loop validation mandatory; full audit trail.
tags: ai, ollama, boundary, adr-018, adr-020, privacy, audit
---

## AI Integration Boundary

LCBP3 uses **on-premises AI only** (Ollama on Admin Desktop) with strict isolation from the data layers.

---

## The Boundary

```
┌────────────────────────────────────────────────────────────┐
│  User Browser (Next.js)                                    │
└─────────────────────────┬──────────────────────────────────┘
                          │ (authenticated HTTPS)
┌─────────────────────────▼──────────────────────────────────┐
│  DMS API (NestJS) ◀── enforces CASL, validation, audit     │
│    ├─ AiGateway (proxies to Ollama)                        │
│    └─ DB + Storage (Elasticsearch, MariaDB, File System)   │
└─────────────────────────┬──────────────────────────────────┘
                          │ (HTTP → Admin Desktop, internal)
┌─────────────────────────▼──────────────────────────────────┐
│  Admin Desktop (Desk-5439)                                 │
│    ├─ Ollama (Gemma 4)                                     │
│    ├─ PaddleOCR (Thai + English)                           │
│    └─ n8n orchestration                                    │
└────────────────────────────────────────────────────────────┘
```
**❗ Admin Desktop has NO network access to MariaDB, no SMB to storage, no shared secrets.** It receives base64-encoded file bytes over HTTPS and returns extracted text + suggestions.
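
What actually crosses the boundary can be sketched as a tiny payload builder (framework-free; the `AiExtractRequest` shape is an assumption for illustration, not the project's real contract):

```typescript
// Minimal sketch of the payload the DMS API sends across the boundary.
// `AiExtractRequest` is a hypothetical shape; the real contract lives in the spec.
interface AiExtractRequest {
  fileBase64: string;
  mimeType: string;
}

function buildExtractPayload(fileBytes: Uint8Array, mimeType: string): AiExtractRequest {
  // Base64-encode the raw bytes; no paths, hostnames, or credentials ever cross
  return {
    fileBase64: Buffer.from(fileBytes).toString('base64'),
    mimeType,
  };
}
```

The key property: the Admin Desktop sees only bytes, never storage locations or DB handles.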

---

## Required Patterns

### 1. AiGateway Module (backend)

```typescript
@Module({
  controllers: [AiController],
  providers: [AiService, AiGateway, AiAuditLogger],
  exports: [AiService],
})
export class AiModule {}

@Injectable()
export class AiService {
  async extractMetadata(fileId: number, user: User): Promise<ExtractedMetadata> {
    // 1. Authorize (CASL: user can read this file)
    await this.ability.ensureCan(user, 'read', File, fileId);

    // 2. Load file (DMS API, inside the boundary)
    const fileBytes = await this.storageService.read(fileId);

    // 3. Call Admin Desktop AI over HTTP
    const raw = await this.aiGateway.extract(fileBytes);

    // 4. Validate AI output schema (Zod)
    const parsed = ExtractedMetadataSchema.parse(raw);

    // 5. Audit log (who, what, when, model, confidence)
    await this.auditLogger.log({
      userId: user.id,
      action: 'ai.extract_metadata',
      fileId,
      model: raw.model,
      confidence: parsed.confidence,
    });

    // 6. Return — frontend MUST render for human confirmation
    return parsed;
  }
}
```
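
What the Zod parse in step 4 must guarantee can be sketched without Zod (a hand-rolled, fail-closed validator; the field names here are assumptions, not the project's real schema):

```typescript
interface ExtractedMetadata {
  model: string;
  confidence: number; // 0.000 to 1.000, mirroring DECIMAL(4,3)
  fields: Record<string, string>;
}

// Fail-closed parse: anything unexpected throws; nothing is silently coerced
function parseExtractedMetadata(raw: unknown): ExtractedMetadata {
  const obj = raw as Partial<ExtractedMetadata> | null;
  if (typeof obj?.model !== 'string') throw new Error('model must be a string');
  if (typeof obj.confidence !== 'number' || obj.confidence < 0 || obj.confidence > 1) {
    throw new Error('confidence must be a number in [0, 1]');
  }
  if (typeof obj.fields !== 'object' || obj.fields === null) {
    throw new Error('fields must be an object');
  }
  return { model: obj.model, confidence: obj.confidence, fields: obj.fields as Record<string, string> };
}
```

The point is the boundary behavior: malformed AI output never reaches persistence or the client.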

### 2. Human-in-the-Loop

AI output is **never persisted directly**. Users must confirm via `DocumentReviewForm`:

```tsx
<DocumentReviewForm
  document={doc}
  aiSuggestions={suggestions}
  onConfirm={(reviewed) => saveMetadata(reviewed)} // user edits applied
/>
```

The `human_confirmed_at` timestamp and the diff (AI suggestion → final value) are stored in the audit log.
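
The stored suggestion-to-final diff can be computed with a small helper (a sketch; the flat record shapes are assumptions):

```typescript
// Compute which fields the reviewer changed, for the audit log's output_summary.
// Returns only the keys whose final value differs from the AI suggestion.
function suggestionDiff(
  suggested: Record<string, string>,
  confirmed: Record<string, string>,
): Record<string, { from: string; to: string }> {
  const diff: Record<string, { from: string; to: string }> = {};
  for (const key of Object.keys(confirmed)) {
    if (suggested[key] !== confirmed[key]) {
      diff[key] = { from: suggested[key] ?? '', to: confirmed[key] };
    }
  }
  return diff;
}
```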

### 3. Rate Limiting

```typescript
@Post('ai/extract')
@UseGuards(JwtAuthGuard, CaslAbilityGuard, ThrottlerGuard)
@Throttle({ default: { limit: 10, ttl: 60_000 } }) // 10 req/min/user
async extract(@Body() dto: ExtractDto) { /* ... */ }
```

---

## ❌ Forbidden

```typescript
// ❌ AI container connecting to DB
// docker-compose.yml inside ai-service:
// environment:
//   DATABASE_URL: mysql://... ← NEVER

// ❌ AI SDK calling cloud API
import OpenAI from 'openai'; // ❌ No cloud AI SDKs in production code
const client = new OpenAI({ apiKey: ... });

// ❌ Persisting AI output without human confirm
async extractAndSave(fileId: number) {
  const metadata = await this.ai.extract(fileId);
  await this.repo.save({ fileId, ...metadata }); // ❌ skips human review
}

// ❌ Skipping audit log
const result = await this.aiGateway.extract(bytes); // no logging
return result;
```

---

## Audit Log Schema

```sql
CREATE TABLE ai_audit_log (
  id INT AUTO_INCREMENT PRIMARY KEY,
  public_id UUID UNIQUE NOT NULL,
  user_id INT NOT NULL,
  action VARCHAR(64) NOT NULL,      -- 'ai.extract_metadata', 'ai.classify', etc.
  file_id INT,
  model VARCHAR(64),                -- 'gemma-4:7b', 'paddleocr-v3'
  confidence DECIMAL(4,3),
  input_hash CHAR(64),              -- SHA-256 of input for replay detection
  output_summary JSON,
  human_confirmed_at DATETIME,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  INDEX idx_user_created (user_id, created_at),
  INDEX idx_file (file_id)
);
```
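
The `input_hash` column can be produced with Node's built-in crypto; a minimal sketch:

```typescript
import { createHash } from 'node:crypto';

// SHA-256 of the raw input bytes, hex-encoded: exactly 64 chars, fitting CHAR(64)
function inputHash(fileBytes: Uint8Array): string {
  return createHash('sha256').update(fileBytes).digest('hex');
}
```

Hashing the input rather than storing it keeps the audit log small while still allowing replay detection (same hash, same request).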

---

## Reference

- [ADR-018 AI Boundary](../../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)
- [ADR-020 AI Intelligence Integration](../../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)
- [ADR-017 Ollama Data Migration](../../../../specs/06-Decision-Records/ADR-017-ollama-data-migration.md)
@@ -0,0 +1,181 @@

---
title: Workflow Engine + Document Numbering + Workflow Context (ADR-001 / 002 / 021)
impact: CRITICAL
impactDescription: DSL-based state machine; double-lock numbering; integrated workflow context exposed to clients.
tags: workflow, numbering, redlock, version-column, adr-001, adr-002, adr-021
---

## Workflow Engine + Numbering + Context

LCBP3 uses a **unified workflow engine** (DSL-based state machine) across RFA, Transmittal, Correspondence, Circulation, and Shop Drawing. Every state transition goes through the same engine — no per-type routing tables.

---

## ADR-001: Unified Workflow Engine

### State Transition Pattern

```typescript
@Injectable()
export class WorkflowEngine {
  async transition(
    instanceId: string,
    action: WorkflowAction,
    actor: User,
    context?: WorkflowContext,
  ): Promise<WorkflowInstance> {
    // 1. Load current state from DB (never trust client-provided state)
    const instance = await this.repo.findOneByPublicId(instanceId);
    if (!instance) throw new NotFoundException();

    // 2. Validate transition against DSL
    const dsl = await this.dslService.load(instance.workflowTypeId);
    const nextState = dsl.resolve(instance.currentState, action);
    if (!nextState) {
      throw new BusinessException(
        `Action ${action} not allowed from state ${instance.currentState}`,
        'ไม่สามารถดำเนินการนี้ได้ในสถานะปัจจุบัน',
        'กรุณาตรวจสอบขั้นตอนการอนุมัติ',
        'WF_INVALID_TRANSITION',
      );
    }

    // 3. Apply transition atomically (optimistic lock via @VersionColumn)
    instance.currentState = nextState;
    await this.repo.save(instance); // throws OptimisticLockVersionMismatchError on race

    // 4. Emit event for listeners (notifications via BullMQ — ADR-008)
    this.eventBus.publish(new WorkflowTransitionedEvent(instance, action, actor));

    return instance;
  }
}
```

### ❌ Anti-Patterns

- ❌ Hard-coded `switch (state)` in controllers/services
- ❌ Trusting `currentState` from request body
- ❌ Creating separate routing tables per document type
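
The DSL resolution used in step 2 of the pattern reduces to a transition-table lookup. A minimal framework-free sketch (the states, actions, and table here are illustrative, not the project's real DSL):

```typescript
type TransitionTable = Record<string, Record<string, string>>;

// A tiny illustrative DSL: current state → allowed action → next state
const rfaTransitions: TransitionTable = {
  draft: { submit: 'pending_review' },
  pending_review: { approve: 'approved', reject: 'rejected', 'request-revision': 'draft' },
};

// Returns the next state, or null when the action is not allowed; the engine
// turns null into a WF_INVALID_TRANSITION BusinessException.
function resolveTransition(table: TransitionTable, currentState: string, action: string): string | null {
  return table[currentState]?.[action] ?? null;
}
```

Because the table is data, adding a document type means adding a table, never adding a `switch`.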

---

## ADR-002: Document Numbering (Double-Lock)

Concurrent requests for a new document number **must** use both:

1. **Redis Redlock** — distributed lock across app instances
2. **TypeORM `@VersionColumn`** — optimistic lock on the counter row

### Counter Entity

```typescript
@Entity('document_number_counters')
@Unique(['projectId', 'documentTypeId'])
export class DocumentNumberCounter extends UuidBaseEntity {
  @Column({ name: 'project_id' })
  projectId: number;

  @Column({ name: 'document_type_id' })
  documentTypeId: number;

  @Column({ name: 'last_number', default: 0 })
  lastNumber: number;

  @VersionColumn()
  version: number; // ❗ Optimistic lock — do not rename, do not remove
}
```

### Service Pattern

```typescript
@Injectable()
export class DocumentNumberingService {
  constructor(
    @InjectRepository(DocumentNumberCounter)
    private counterRepo: Repository<DocumentNumberCounter>,
    private redlock: RedlockService,
    private readonly logger: Logger,
  ) {}

  async generateNext(ctx: NumberingContext): Promise<string> {
    const lockKey = `doc_num:${ctx.projectId}:${ctx.documentTypeId}`;

    // Distributed lock — 3s TTL, up to 5 retries
    const lock = await this.redlock.acquire([lockKey], 3000);

    try {
      // Optimistic lock via @VersionColumn
      const counter = await this.counterRepo.findOne({
        where: { projectId: ctx.projectId, documentTypeId: ctx.documentTypeId },
      });

      if (!counter) {
        throw new NotFoundException('Counter not initialized for this project/type');
      }

      counter.lastNumber += 1;
      await this.counterRepo.save(counter); // may throw OptimisticLockVersionMismatchError

      return this.formatNumber(ctx, counter.lastNumber);
    } catch (err) {
      if (err instanceof OptimisticLockVersionMismatchError) {
        this.logger.warn(`Numbering race detected for ${lockKey}, retrying`);
        // Let caller retry via BullMQ retry policy
      }
      throw err;
    } finally {
      await lock.release();
    }
  }

  private formatNumber(ctx: NumberingContext, seq: number): string {
    // e.g. "LCBP3-RFA-0042"
    return `${ctx.projectCode}-${ctx.typeCode}-${String(seq).padStart(4, '0')}`;
  }
}
```

### ❌ Anti-Patterns

- ❌ App-side counter only (`let counter = 0; counter++`)
- ❌ Using `findOne` + `update` without `@VersionColumn`
- ❌ Using only Redis lock without DB optimistic lock (race if Redis fails)
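
The double-lock rationale fits in a few lines: even if the Redis lock fails open, the version check rejects the losing writer. An in-memory sketch of the `@VersionColumn` semantics (not TypeORM itself):

```typescript
// In-memory stand-in for the counter row with a TypeORM-style version column.
interface CounterRow { lastNumber: number; version: number; }

// Save succeeds only if the caller still holds the current version; a stale
// read means a lost-update race and must fail, exactly like @VersionColumn.
function versionedSave(db: CounterRow, read: CounterRow): void {
  if (read.version !== db.version) {
    throw new Error('OptimisticLockVersionMismatchError');
  }
  db.lastNumber = read.lastNumber;
  db.version += 1;
}
```

Two writers that both read version 7 cannot both write: the second save sees version 8 in the "database" and throws, so no document number is ever issued twice.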

---

## ADR-021: Integrated Workflow Context

Every workflow-aware API response **must** expose:

```typescript
export class WorkflowEnvelope<T> {
  data: T;

  workflow: {
    instancePublicId: string;
    currentState: string;        // e.g. 'pending_review'
    availableActions: string[];  // e.g. ['approve', 'reject', 'request-revision']
    canEdit: boolean;            // computed from CASL + current state
    lastTransitionAt: string;    // ISO 8601
  };

  stepAttachments?: Array<{      // files produced by the current/previous step
    publicId: string;
    fileName: string;
    stepCode: string;
    downloadUrl: string;
  }>;
}
```

Frontend uses `workflow.availableActions` to render buttons — no client-side state machine logic.
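
Consuming the envelope is then a pure mapping from `availableActions` to UI buttons; a sketch (the labels are illustrative):

```typescript
// Map server-provided actions to button configs: the client never decides
// which actions exist, it only decorates what the envelope says.
const ACTION_LABELS: Record<string, string> = {
  approve: 'Approve',
  reject: 'Reject',
  'request-revision': 'Request Revision',
};

function toButtons(availableActions: string[]): Array<{ action: string; label: string }> {
  return availableActions.map((action) => ({
    action,
    label: ACTION_LABELS[action] ?? action, // unknown actions still render; no hidden client logic
  }));
}
```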

---

## Reference

- [ADR-001 Unified Workflow Engine](../../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)
- [ADR-002 Document Numbering Strategy](../../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)
- [ADR-021 Workflow Context](../../../../specs/06-Decision-Records/ADR-021-workflow-context.md)
@@ -0,0 +1,137 @@

---
title: Two-Phase File Upload + ClamAV (ADR-016)
impact: CRITICAL
impactDescription: Upload → Temp → ClamAV scan → Commit → Permanent. Whitelist + 50MB cap. StorageService only.
tags: file-upload, clamav, security, adr-016, storage
---

## Two-Phase File Upload (ADR-016)

**Never write uploaded files directly to permanent storage.** All uploads must go through:

```
Client → Upload endpoint → Temp storage → ClamAV scan → Commit endpoint → Permanent storage
```

---

## Constraints (non-negotiable)

| Rule | Value |
| --- | --- |
| Allowed MIME types | `application/pdf`, `image/vnd.dwg`, `application/vnd.openxmlformats-officedocument.wordprocessingml.document`, `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`, `application/zip` |
| Allowed extensions | `.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip` |
| Max size | 50 MB |
| Temp TTL | 24 h (purged by cron) |
| Virus scan | ClamAV (blocking) |
| Mover | `StorageService` only — never `fs.rename` directly from controller |
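
The whitelist rows above collapse into one guard. A framework-free sketch of such an `assertAllowed` check (the real implementation lives in the backend's file validator):

```typescript
const ALLOWED_MIME = new Set([
  'application/pdf',
  'image/vnd.dwg',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
  'application/zip',
]);
const ALLOWED_EXT = new Set(['.pdf', '.dwg', '.docx', '.xlsx', '.zip']);
const MAX_BYTES = 50 * 1024 * 1024;

// Defense in depth: extension AND MIME AND size must all pass.
function assertAllowed(name: string, mimeType: string, size: number): void {
  const ext = name.slice(name.lastIndexOf('.')).toLowerCase();
  if (!ALLOWED_EXT.has(ext)) throw new Error(`extension ${ext} not allowed`);
  if (!ALLOWED_MIME.has(mimeType)) throw new Error(`MIME ${mimeType} not allowed`);
  if (size > MAX_BYTES) throw new Error('file exceeds 50 MB');
}
```

Checking both extension and MIME matters because each is cheap to spoof on its own.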

---

## Phase 1: Upload to Temp

```typescript
@Post('upload')
@UseGuards(JwtAuthGuard, ThrottlerGuard)
@UseInterceptors(FileInterceptor('file', {
  limits: { fileSize: 50 * 1024 * 1024 }, // 50 MB
}))
async uploadTemp(
  @UploadedFile() file: Express.Multer.File,
  @CurrentUser() user: User,
): Promise<{ tempId: string; expiresAt: string }> {
  // 1. Validate MIME + extension (defense in depth)
  this.fileValidator.assertAllowed(file);

  // 2. Scan with ClamAV
  const scanResult = await this.clamavService.scan(file.buffer);
  if (!scanResult.clean) {
    throw new BusinessException(
      `ClamAV rejected: ${scanResult.signature}`,
      'ไฟล์ไม่ปลอดภัย ระบบตรวจพบความเสี่ยง',
      'กรุณาตรวจสอบไฟล์และลองใหม่อีกครั้ง',
      'FILE_INFECTED',
    );
  }

  // 3. Save to temp (encrypted at rest)
  const tempId = await this.storageService.saveToTemp(file, user.id);

  return {
    tempId,
    expiresAt: addHours(new Date(), 24).toISOString(),
  };
}
```

---

## Phase 2: Commit in Transaction

The business operation (e.g., creating a Correspondence) promotes temp files to permanent **in the same DB transaction**.

```typescript
async createCorrespondence(dto: CreateCorrespondenceDto, user: User) {
  return this.dataSource.transaction(async (manager) => {
    // 1. Create domain entity
    const entity = await manager.save(Correspondence, {
      ...dto,
      createdById: user.id,
    });

    // 2. Commit temp files → permanent (ACID together with entity)
    await this.storageService.commitFiles(
      dto.tempFileIds,
      { entityId: entity.id, entityType: 'correspondence' },
      manager,
    );

    return entity;
  });
}
```

If the transaction rolls back, temp files remain and expire in 24h — no orphaned permanent files.
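
The 24-hour cleanup is a timestamp filter. A sketch of the selection the cron-driven purge performs (in-memory stand-in; the record shape is assumed):

```typescript
interface TempRecord { tempId: string; createdAt: Date; }

const TEMP_TTL_MS = 24 * 60 * 60 * 1000; // 24 h, matching the constraints table

// Returns the records the cron job should delete at `now`.
function selectExpired(records: TempRecord[], now: Date): TempRecord[] {
  return records.filter((r) => now.getTime() - r.createdAt.getTime() > TEMP_TTL_MS);
}
```

Because rollback simply leaves the temp record in place, no compensating delete is needed; expiry is the single cleanup path.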

---

## StorageService Contract

```typescript
export interface StorageService {
  saveToTemp(file: Express.Multer.File, ownerId: number): Promise<string>;
  commitFiles(
    tempIds: string[],
    target: { entityId: number; entityType: string },
    manager: EntityManager,
  ): Promise<FileRecord[]>;
  purgeExpiredTemp(): Promise<number>; // called by cron
  getPermanentPath(fileId: number): Promise<string>;
}
```

---

## ❌ Forbidden

```typescript
// ❌ Direct write to permanent
fs.writeFileSync(`/var/storage/${file.originalname}`, file.buffer);

// ❌ Skip ClamAV
await this.storageService.savePermanent(file);

// ❌ Non-whitelist MIME
@UseInterceptors(FileInterceptor('file')) // no size or type limit

// ❌ Commit outside transaction
const entity = await this.repo.save(...);
await this.storageService.commitFiles(tempIds, ...); // race: entity exists, files may fail
```

---

## Reference

- [ADR-016 Security & Authentication](../../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- [Edge Cases](../../../../specs/01-Requirements/01-06-edge-cases-and-rules.md) — file upload scenarios
@@ -32,6 +32,7 @@ const CATEGORIES = [
  { prefix: 'api-', name: 'API Design', impact: 'MEDIUM', section: 8 },
  { prefix: 'micro-', name: 'Microservices', impact: 'MEDIUM', section: 9 },
  { prefix: 'devops-', name: 'DevOps & Deployment', impact: 'LOW-MEDIUM', section: 10 },
  { prefix: 'lcbp3-', name: 'LCBP3 Project-Specific', impact: 'CRITICAL', section: 11 },
];

interface RuleFrontmatter {
@@ -50,8 +51,10 @@ interface Rule {
}

function parseFrontmatter(content: string): { frontmatter: RuleFrontmatter | null; body: string } {
  // Normalize CRLF → LF so the regex works on Windows-authored files
  const normalized = content.replace(/\r\n/g, '\n');
  const frontmatterRegex = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/;
  const match = normalized.match(frontmatterRegex);

  if (!match) {
    return { frontmatter: null, body: content };
@@ -1,6 +1,8 @@
---
name: next-best-practices
description: Next.js best practices for LCBP3-DMS frontend. Enforces ADR-019 (publicId only, no parseInt/id fallback), TanStack Query + RHF + Zod, shadcn/ui, i18n, ADR-007 error UX, ADR-021 IntegratedBanner/WorkflowLifecycle, two-phase file upload.
version: 1.8.9
scope: frontend
user-invocable: false
---
@@ -157,6 +159,24 @@ See [parallel-routes.md](./parallel-routes.md) for:
- `default.tsx` for fallbacks
- Closing modals correctly with `router.back()`

## i18n (Thai / English)

See [i18n.md](./i18n.md) for:

- `useTranslations('namespace')` pattern
- Key naming (kebab-case, feature-namespaced)
- When Zod messages stay inline vs i18n
- Server-side `userMessage` passthrough

## Two-Phase File Upload

See [two-phase-upload.md](./two-phase-upload.md) for:

- `useDropzone` + `useMutation` hook
- `tempFileIds` form-state pattern
- Whitelist MIME / max-size (must mirror backend)
- Clear-on-submit / expired-temp handling

## Self-Hosting

See [self-hosting.md](./self-hosting.md) for:
@@ -204,28 +224,38 @@ const form = useForm({
});
```

### ADR-019 UUID Handling (CRITICAL — March 2026 Pattern)

> **Updated:** Use `publicId` directly — do not use an `id ?? ''` fallback or a parallel `uuid` field.

```tsx
// ✅ CORRECT — the interface exposes only publicId
interface Contract {
  publicId?: string; // UUID from API — use this
  contractCode: string;
  contractName: string;
}

// ✅ CORRECT — Select options (no id fallback)
const options = contracts.map((c) => ({
  label: `${c.contractName} (${c.contractCode})`,
  value: c.publicId ?? '', // publicId only
  key: c.publicId ?? c.contractCode, // falling back to a business field is acceptable
}));

// ❌ WRONG — old pattern (forbidden)
interface OldContract {
  id?: number; // ❌ do not expose the internal INT id
  uuid?: string; // ❌ do not use a `uuid` field
  publicId?: string;
}
const oldValue = String(c.publicId ?? c.id ?? ''); // ❌ `id ?? ''` fallback is forbidden

// ❌ NEVER parseInt on a UUID
// const badId = parseInt(projectPublicId); // "019505..." → leading digits only (WRONG!)

// ✅ Pass the UUID string to the API as-is
apiClient.get(`/projects/${projectPublicId}`);
```

### Naming Conventions
@@ -312,13 +342,17 @@ apiClient.interceptors.request.use((config) => {

### Anti-Patterns (forbidden)

- ❌ Fetching data directly in `useEffect` (use TanStack Query)
- ❌ Props drilling deeper than 3 levels
- ❌ Inline styles (use Tailwind)
- ❌ `console.log` in committed code
- ❌ `parseInt()` / `Number()` / `+` on UUID values (ADR-019)
- ❌ An `id ?? ''` fallback on `publicId` (use `publicId ?? ''` or fall back to a business field)
- ❌ Exposing `uuid` alongside `publicId` in an interface (use `publicId` only)
- ❌ Using the array index as a list key
- ❌ snake_case in form field names (use camelCase)
- ❌ Hardcoded Thai/English strings in components (use i18n keys)
- ❌ The `any` type (strict mode)

---
@@ -0,0 +1,79 @@

# i18n (Thai / English)

LCBP3 frontend **must not** hardcode Thai or English UI strings in components.

## Rules

1. **All user-facing strings go through the i18n layer** (`next-intl` / `i18next` — check `frontend/package.json`).
2. **Keys use kebab-case**, namespaced by feature:
   - `correspondence.list.title`
   - `correspondence.form.submit`
   - `common.actions.cancel`
3. **Comments in code remain Thai** (business logic explanation); **only UI copy** goes through i18n.
4. **Error messages** from backend (via ADR-007 `userMessage`) are already localized server-side — render them directly, don't translate client-side.

---

## ❌ Wrong

```tsx
export function CorrespondenceHeader() {
  return <h1>รายการหนังสือติดต่อ</h1>; // ❌ hardcoded Thai
}

toast.success('บันทึกสำเร็จ'); // ❌ hardcoded
```

---

## ✅ Right

```tsx
import { useTranslations } from 'next-intl';

export function CorrespondenceHeader() {
  const t = useTranslations('correspondence.list');
  return <h1>{t('title')}</h1>;
}

toast.success(t('save.success'));
```

Translation files:

```json
// messages/th.json
{
  "correspondence": {
    "list": { "title": "รายการหนังสือติดต่อ" },
    "save": { "success": "บันทึกสำเร็จ" }
  }
}

// messages/en.json
{
  "correspondence": {
    "list": { "title": "Correspondence List" },
    "save": { "success": "Saved successfully" }
  }
}
```

---

## Zod Error Messages

Zod error messages shown in forms **do** stay in Thai inline (per `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`), because they're schema-bound and rarely need translation. If dual-language support becomes required, wrap with an i18n-aware resolver:

```ts
const schema = z.object({
  projectUuid: z.string().uuid(t('validation.project.required')),
});
```

---

## Reference

- [i18n Guidelines](../../../specs/05-Engineering-Guidelines/05-08-i18n-guidelines.md)
- [Frontend Guidelines](../../../specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md)
@@ -0,0 +1,100 @@

# Two-Phase File Upload (Frontend)

Pair with the [backend two-phase upload rule](../nestjs-best-practices/rules/security-file-two-phase-upload.md).

## Flow

```
User drops file
  → POST /files/upload (temp) → { tempId, expiresAt }
  → store tempId in form state
  → user submits form
  → POST /correspondences (with tempFileIds) → backend commits in transaction
```

## Hook Pattern

```tsx
'use client';

import { useDropzone } from 'react-dropzone';
import { useMutation } from '@tanstack/react-query';

export function useTwoPhaseUpload() {
  const uploadTemp = useMutation({
    mutationFn: async (file: File) => {
      const fd = new FormData();
      fd.append('file', file);
      const { data } = await apiClient.post<{ tempId: string; expiresAt: string }>(
        '/files/upload',
        fd,
      );
      return data;
    },
  });

  return uploadTemp;
}
```

## Form Integration (RHF)

```tsx
export function CorrespondenceForm() {
  const form = useForm<FormData>({ resolver: zodResolver(schema) });
  const uploadTemp = useTwoPhaseUpload();
  const [tempFileIds, setTempFileIds] = useState<string[]>([]);

  const { getRootProps, getInputProps } = useDropzone({
    accept: {
      'application/pdf': ['.pdf'],
      'image/vnd.dwg': ['.dwg'],
      'application/vnd.openxmlformats-officedocument.wordprocessingml.document': ['.docx'],
      'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': ['.xlsx'],
      'application/zip': ['.zip'],
    },
    maxSize: 50 * 1024 * 1024, // 50 MB — must match backend
    onDrop: async (files) => {
      const results = await Promise.all(files.map((f) => uploadTemp.mutateAsync(f)));
      setTempFileIds((prev) => [...prev, ...results.map((r) => r.tempId)]);
    },
  });

  const onSubmit = async (values: FormData) => {
    await correspondenceService.create({
      ...values,
      tempFileIds, // committed server-side in the same DB transaction
    });
    setTempFileIds([]);
  };

  return (
    <form onSubmit={form.handleSubmit(onSubmit)}>
      <div {...getRootProps()} className="dropzone">
        <input {...getInputProps()} />
        <p>{t('upload.dragDrop')}</p>
      </div>
      {/* other fields */}
    </form>
  );
}
```

## Rules

- **Whitelist MIME types** — must mirror backend ADR-016 whitelist (`.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip`).
- **50 MB cap** — enforce client-side too (better UX) plus server-side (authoritative).
- **Show temp-file pills** with remove button — users see what will be attached.
- **Clear `tempFileIds` on success/cancel** — prevent stale IDs on subsequent submits.
- **No retry of expired temps** — if `expiresAt` passed, prompt re-upload.
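
The expired-temp rule is a single comparison against the `expiresAt` returned by the upload endpoint; a sketch:

```typescript
// true → the tempId is no longer valid server-side; prompt the user to re-upload
function isTempExpired(expiresAt: string, now: Date = new Date()): boolean {
  return now.getTime() >= Date.parse(expiresAt);
}
```

Run the check on submit, not only on drop: a form left open overnight can hold tempIds that the backend cron has already purged.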

## ❌ Forbidden

- ❌ Uploading directly to permanent storage endpoint (no commit phase)
- ❌ Hardcoded MIME list in component (keep in shared constant file mirrored from backend)
- ❌ Ignoring `maxSize` — backend will reject but UX suffers

## Reference

- [ADR-016 Security](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- Backend rule: [`security-file-two-phase-upload.md`](../nestjs-best-practices/rules/security-file-two-phase-upload.md)
@@ -1,17 +1,19 @@

# UUID Handling (ADR-019) — March 2026 Pattern

**Project-specific: Hybrid Identifier Strategy for NAP-DMS**

This project uses ADR-019: INT Primary Key (internal) + UUIDv7 (public API). Frontend code must handle this correctly.

> **Updated pattern:** The backend now exposes `publicId` directly — there is no longer an `@Expose({ name: 'id' })` rename. The frontend uses `publicId` as-is — never fall back to `id`.
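
A quick, framework-free demonstration of why numeric coercion must never touch a `publicId` (the sample UUIDv7-style string below is made up):

```typescript
// A made-up UUIDv7-style publicId
const publicId = '0195a2b4-7c1d-7e2f-8a3b-123456789abc';

// parseInt silently consumes the leading decimal digits and stops at 'a':
// it returns a meaningless number instead of failing loudly.
const mangled = parseInt(publicId, 10); // 195

// The safe check: validate the string shape, never convert it.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const isValid = UUID_RE.test(publicId); // true
```

The silent partial parse is the trap: the code keeps running with a wrong number, so the bug surfaces far from its cause.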
|
||||||
|
|
||||||
## The Pattern
|
## The Pattern
|
||||||
|
|
||||||
| Source | Field Name | Type | Notes |
|
| Source | Field Name | Type | Notes |
|
||||||
|--------|------------|------|-------|
|
| ------------------------ | ------------------- | ----------------- | ----------------------------------------------------------- |
|
||||||
| **API Response** | `id` | `string` (UUID) | Actually `publicId` exposed via `@Expose({ name: 'id' })` |
|
| **API Response** | `publicId` | `string` (UUIDv7) | Exposed directly (no rename) |
|
||||||
| **TypeScript Interface** | `publicId?: string` | UUID string | Use this for all references |
|
| **TypeScript Interface** | `publicId?: string` | UUID string | ใช้ตัวนี้เท่านั้น |
|
||||||
| **Fallback** | `id?: number` | INT (internal) | May be undefined due to `@Exclude()` |
|
| **Form DTO** | `xxxUuid` | `string` | DTO field names: `projectUuid`, `contractUuid` (input only) |
|
||||||
| **Form Values** | `xxxUuid` | `string` | DTO field names: `projectUuid`, `contractUuid` |
|
| **URL param** | `[publicId]` | `string` (UUID) | e.g. `/correspondences/[publicId]/page.tsx` |
|
||||||
|
|
||||||
## Critical Rules
|
## Critical Rules
|
||||||
|
|
||||||
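The `publicId`-only pattern summarized in the table above can be sketched in TypeScript. This is a minimal illustration, not code from the repository — `Project` and `toOptions` are hypothetical names, and the sketch assumes the backend serializes `publicId` as a plain UUIDv7 string.

```typescript
// Minimal sketch of the updated ADR-019 pattern described above.
// `Project` and `toOptions` are illustrative names, not actual project files.
interface Project {
  publicId?: string; // UUIDv7 from the API — the only identifier the frontend sees
  projectCode: string;
  projectName: string;
}

// Map entities to select options using publicId only — no `id ?? ''` fallback.
function toOptions(projects: Project[]) {
  return projects
    .filter((p) => !!p.publicId) // drop entities that somehow lack a publicId
    .map((p) => ({
      label: `${p.projectName} (${p.projectCode})`,
      value: p.publicId!, // already a string; no String() wrapper needed
      key: p.publicId!,
    }));
}
```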
@@ -31,22 +33,26 @@ const id = +projectId; // NaN
apiClient.get(`/projects/${projectId}`); // projectId is already UUID string
```

-### 2. Use `publicId ?? id` Pattern
+### 2. Use `publicId` Only — NO `id ?? ''` Fallback

```tsx
-// types/project.ts
+// ✅ CORRECT — types/project.ts
interface Project {
-  id?: number; // Internal INT (may be undefined)
-  publicId?: string; // UUID from API (use this)
+  publicId?: string; // UUID from API — use this field only
  projectCode: string;
  projectName: string;
}

-// Component usage
+// ✅ CORRECT — Component usage
const projectOptions = projects.map((p) => ({
  label: `${p.projectName} (${p.projectCode})`,
-  value: String(p.publicId ?? p.id ?? ''), // ADR-019 pattern
-  key: String(p.publicId ?? p.id ?? ''),
+  value: p.publicId ?? '', // ADR-019 — no String() needed, and no fallback to id
+  key: p.publicId ?? p.projectCode, // falling back to a business field is fine
+}));
+
+// ❌ WRONG — old pattern
+const oldOptions = projects.map((p) => ({
+  value: String(p.publicId ?? p.id ?? ''), // ❌ `id ?? ''` fallback
}));
```

@@ -84,11 +90,10 @@ export function ContractSelect({ contracts, value, onChange }: ContractSelectPro
        <SelectValue placeholder="เลือกสัญญา" />
      </SelectTrigger>
      <SelectContent>
-        {contracts.map((c) => (
-          <SelectItem
-            key={String(c.publicId ?? c.id ?? '')}
-            value={String(c.publicId ?? c.id ?? '')}
-          >
+        {contracts
+          .filter((c) => !!c.publicId) // keep only contracts that have a publicId
+          .map((c) => (
+            <SelectItem key={c.publicId} value={c.publicId!}>
            {c.contractName} ({c.contractCode})
          </SelectItem>
        ))}
@@ -113,7 +118,9 @@ const columns: ColumnDef<Discipline>[] = [
    cell: ({ row }) => {
      const contract = row.original.contract;
      return contract ? (
-        <span>{contract.contractName} ({contract.contractCode})</span>
+        <span>
+          {contract.contractName} ({contract.contractCode})
+        </span>
      ) : (
        <span className="text-muted-foreground">-</span>
      );
@@ -153,10 +160,9 @@ export const contractService = {
## TypeScript Interfaces

```tsx
-// types/entities.ts
+// ✅ CORRECT — types/entities.ts
export interface BaseEntity {
-  id?: number; // Internal INT - may be undefined
-  publicId?: string; // UUID - use this for API calls
+  publicId?: string; // UUID — use this field only (no INT id in the interface)
  createdAt?: string;
  updatedAt?: string;
}
@@ -170,14 +176,12 @@ export interface Project extends BaseEntity {
export interface Contract extends BaseEntity {
  contractCode: string;
  contractName: string;
-  projectId?: number; // Internal INT FK
-  projectUuid?: string; // UUID for DTOs
-  project?: Project; // Relation
+  project?: Project; // Relation (nested entity)
}

-// DTOs
+// DTO (input only — accepts the UUID from the form)
export interface CreateContractDto {
-  projectUuid: string; // Accept UUID from form
+  projectUuid: string; // UUID string from the select
  contractCode: string;
  contractName: string;
}
@@ -215,9 +219,7 @@ export function ContractForm() {

  return (
    <Form {...form}>
-      <form onSubmit={form.handleSubmit(onSubmit)}>
-        {/* Form fields */}
-      </form>
+      <form onSubmit={form.handleSubmit(onSubmit)}>{/* Form fields */}</form>
    </Form>
  );
}
@@ -238,12 +240,13 @@ export default async function ContractPage({ params }: { params: Promise<{ id: s

## Common Pitfalls

-| Pitfall | Wrong | Right |
-|---------|-------|-------|
-| Assuming `entity.id` exists | `key={entity.id}` | `key={entity.publicId ?? entity.id}` |
+| Pitfall | ❌ Wrong | ✅ Right |
+| ---------------------------- | ------------------------------------------------ | --------------------------------- |
+| Using INT `id` | `key={entity.id}` | `key={entity.publicId}` |
| parseInt on UUID | `parseInt(projectId)` | `projectId` (string) |
| Field name mismatch | `name="project_id"` | `name="projectUuid"` |
-| Missing fallback | `value={entity.publicId}` | `value={entity.publicId ?? entity.id ?? ''}` |
+| `id ?? ''` fallback | `value={publicId ?? id ?? ''}` | `value={publicId ?? ''}` |
+| `uuid` + `publicId` together | `interface { uuid?: string; publicId?: string }` | `interface { publicId?: string }` |

## Reference

@@ -0,0 +1,108 @@
# 🧠 NAP-DMS Agent Skills (v1.8.9)

This file defines the specialized skills and capabilities of the Document Intelligence Engine for the LCBP3 v1.8.9 project, in order to uphold the highest standards of Security and Data Integrity.

**Status**: Production Ready | **Last Updated**: 2026-04-22 | **Total Skills**: 20

> 📌 Shared context for all speckit-\* skills: see [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md).

---

## 🏗️ Architectural & Data Integrity

- **Identifier Strategy Mastery (ADR-019 — March 2026):**
  - Enforce **UUIDv7** as the public ID; entities inherit from `UuidBaseEntity` and expose `publicId` **directly** (never rename it via `@Expose({ name: 'id' })`)
  - Detect and block any use of `parseInt()`, `Number()`, or `+` on UUIDs, on both backend and frontend
  - Verify that entities apply `@Exclude()` to the `INT AUTO_INCREMENT` primary key so it never leaks to the API
  - The frontend uses `publicId` directly — **never** an `id ?? ''` fallback, and never `uuid?: string` alongside `publicId` in an interface
- **Strict Validation Engine:**
  - Enforce **Zod** for frontend form validation
  - Enforce **class-validator** for backend DTOs
  - Verify that an **Idempotency-Key** header is sent with every mutation request (POST/PUT/PATCH)
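The Idempotency-Key rule above can be sketched as a small client helper. This is an illustrative sketch only — `mutationHeaders` is a hypothetical function, not the project's actual API client, and reusing the same key on retries is the usual contract but is stated here as an assumption.

```typescript
// Illustrative sketch — not the project's actual apiClient.
// Rule: every mutation (POST/PUT/PATCH) must carry an Idempotency-Key header.
import { randomUUID } from "node:crypto";

const MUTATION_METHODS = new Set(["POST", "PUT", "PATCH"]);

function mutationHeaders(
  method: string,
  base: Record<string, string> = {},
): Record<string, string> {
  if (!MUTATION_METHODS.has(method.toUpperCase())) return base; // GET etc. unchanged
  return {
    ...base,
    // One fresh key per logical request; a retry of the same request
    // should reuse the key so the backend can deduplicate it.
    "Idempotency-Key": randomUUID(),
  };
}
```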
## ⚙️ Workflow & Concurrency Control

- **DMS Workflow Engine Proficiency:**
  - Expertise in **DSL-based state machines**; always validate every document state transition against the rules in the DSL parser
  - Prevent duplicate approvals by checking the current state in the database before starting any state-transition logic
- **Collision-Free Numbering (ADR-002):**
  - Apply **distributed locking** via **Redis Redlock**, combined with TypeORM `@VersionColumn`, when generating document numbers
  - Never generate numbers with application-side logic alone
- **Asynchronous Task Orchestration (ADR-008):**
  - Offload long-running work (e.g. sending notifications, correspondence routing) exclusively to **BullMQ**
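The numbering rule above can be sketched as follows. Only the formatting helper is real code; the locked section is pseudocode, and the number format, lock key, and `repo`/`redlock` names are assumptions for illustration, not the project's actual spec.

```typescript
// Illustrative sketch of ADR-002-style numbering. The real implementation must
// wrap this in a Redis Redlock critical section plus a TypeORM @VersionColumn
// check; the number format below is an assumption, not the project's spec.
function formatDocumentNumber(prefix: string, year: number, sequence: number): string {
  // e.g. "RFA-2026-00042" — zero-padded so numbers sort lexicographically
  return `${prefix}-${year}-${String(sequence).padStart(5, "0")}`;
}

// Pseudocode for the locked section (never generate numbers without the lock):
// const lock = await redlock.acquire([`lock:number:${prefix}:${year}`], 2000);
// try {
//   const counter = await repo.findOneOrFail(...); // row carries @VersionColumn
//   counter.lastValue += 1;
//   await repo.save(counter);                      // optimistic-lock conflict -> retry
//   return formatDocumentNumber(prefix, year, counter.lastValue);
// } finally { await lock.release(); }
```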
## 🛡️ Security & Integrity Audit

- **RBAC Matrix Enforcement (ADR-016):**
  - Enforce **JwtAuthGuard**, **RolesGuard**, and the **CASL AbilityFactory** in every new controller
  - Verify that `AuditLogInterceptor` is present on every data-mutating API
- **Secure File Lifecycle:**
  - Use the **Two-Phase Upload** flow: Upload → Temp → ClamAV Scan → Commit → Permanent
  - Enforce the file-extension whitelist and the 50 MB max size defined in ADR-016
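The whitelist and size limit above can be sketched as a client-side pre-check. The extension list here is a placeholder assumption — the real check must mirror the backend's constant — and `validateUploadCandidate` is a hypothetical helper, not project code.

```typescript
// Illustrative pre-upload check mirroring the ADR-016 rules above. The
// whitelist contents are an assumption — mirror the backend's real constant.
const ALLOWED_EXTENSIONS = new Set([".pdf", ".docx", ".xlsx", ".dwg", ".jpg", ".png"]);
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50 MB per ADR-016

function validateUploadCandidate(fileName: string, sizeBytes: number): string[] {
  const errors: string[] = [];
  const dot = fileName.lastIndexOf(".");
  const ext = dot >= 0 ? fileName.slice(dot).toLowerCase() : "";
  if (!ALLOWED_EXTENSIONS.has(ext)) errors.push(`extension not whitelisted: ${ext || "(none)"}`);
  if (sizeBytes > MAX_SIZE_BYTES) errors.push("file exceeds 50 MB limit");
  // Note: this is UX-side only — the backend still re-validates and runs the
  // ClamAV scan in the temp phase before any commit to permanent storage.
  return errors;
}
```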
## 🤖 AI Boundary & Privacy (ADR-018/020)

- **Data Isolation:**
  - Guarantee that AI features run only through **Ollama (on-premises)** and that no data ever leaves the network
  - AI accesses data only through the **DMS API** (never directly against the database or storage)
- **Human-in-the-loop Validation:**
  - Design AI output (e.g. extracted document metadata) so that it always requires user confirmation before being saved

## 🏷️ Domain Terminology Consistency

- **Term Correction:** Immediately correct terminology to match the glossary (e.g. Letter → **Correspondence**, Approval Flow → **Workflow Engine**)
- **i18n Guidelines:** Never hard-code Thai/English strings directly in components; use i18n keys only

---

## 🔄 Skill Dependency Matrix

| Skill | Dependencies | Handoffs To | Notes |
| -------------------------- | -------------------- | -------------------------------- | ----------------------------- |
| **speckit-constitution** | None | speckit-specify | Project governance foundation |
| **speckit-specify** | speckit-constitution | speckit-clarify | Feature specification |
| **speckit-clarify** | speckit-specify | speckit-plan | Resolve ambiguities |
| **speckit-plan** | speckit-clarify | speckit-tasks, speckit-checklist | Technical design |
| **speckit-tasks** | speckit-plan | speckit-implement | Task breakdown |
| **speckit-implement** | speckit-tasks | speckit-checker | Code implementation |
| **speckit-checker** | speckit-implement | speckit-tester | Static analysis |
| **speckit-tester** | speckit-checker | speckit-reviewer | Test execution |
| **speckit-reviewer** | speckit-tester | speckit-validate | Code review |
| **speckit-validate** | speckit-reviewer | None | Requirements validation |
| **speckit-analyze** | speckit-tasks | None | Cross-artifact consistency |
| **speckit-migrate** | None | speckit-plan | Legacy code import |
| **speckit-quizme** | speckit-specify | speckit-plan | Logic validation |
| **speckit-diff** | None | speckit-plan | Version comparison |
| **speckit-status** | None | None | Progress tracking |
| **speckit-taskstoissues** | speckit-tasks | None | Issue sync |
| **speckit-checklist** | speckit-plan | None | Requirements validation |
| **nestjs-best-practices** | None | speckit-implement | Backend patterns |
| **next-best-practices** | None | speckit-implement | Frontend patterns |
| **speckit-security-audit** | None | speckit-reviewer | Security validation |

---

## 🛠️ Skill Health Monitoring

### Health Check Scripts (from repo root)

- **Bash**: `./.agents/scripts/bash/audit-skills.sh` - Comprehensive skill health audit
- **PowerShell**: `./.agents/scripts/powershell/audit-skills.ps1` - Windows equivalent

### Validation Scripts

- **Version Check**: `./.agents/scripts/bash/validate-versions.sh` - Ensure version consistency
- **Workflow Sync**: `./.agents/scripts/bash/sync-workflows.sh` - Verify workflow integration

### Health Metrics

- **Total Skills**: 20 implemented
- **Version Alignment**: v1.8.9 across all skills
- **Template Coverage**: 100% for skills requiring templates
- **Documentation**: Complete front matter + shared `_LCBP3-CONTEXT.md` appendix

### Maintenance Schedule

- **Daily**: Run `audit-skills.sh` for health monitoring
- **Weekly**: Run `validate-versions.sh` for version consistency
- **Monthly**: Review skill dependencies and update documentation
@@ -1,7 +1,7 @@
---
name: speckit-analyze
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
-version: 1.0.0
+version: 1.8.9
depends-on:
- speckit-tasks
---
@@ -28,7 +28,7 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items ac

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

-**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.
+**Constitution Authority**: The project constitution (`AGENTS.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.

### Steps

@@ -72,7 +72,7 @@ Load only the minimal necessary context from each artifact:

**From constitution:**

-- Load `.specify/memory/constitution.md` for principle validation
+- Load `AGENTS.md` for principle validation

### 3. Build Semantic Models

@@ -192,3 +192,15 @@ Ask the user: "Would you like me to suggest concrete remediation edits for the t
## Context

{{args}}

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-checker
description: Run static analysis tools and aggregate results.
-version: 1.0.0
+version: 1.8.9
depends-on: []
---
@@ -157,3 +157,15 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
- **Be Actionable**: Every issue should have a clear fix path
- **Don't Duplicate**: Dedupe issues found by multiple tools
- **Respect Configs**: Honor project's existing linter configs

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-checklist
description: Generate a custom checklist for the current feature based on user requirements.
-version: 1.0.0
+version: 1.8.9
---

## Checklist Purpose: "Unit Tests for English"
@@ -300,3 +300,15 @@ Sample items:
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-clarify
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
-version: 1.0.0
+version: 1.8.9
depends-on:
- speckit-specify
handoffs:
@@ -189,3 +189,15 @@ Behavior rules:
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: {{args}}

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-constitution
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
-version: 1.0.0
+version: 1.8.9
handoffs:
- label: Build Specification
  agent: speckit-specify
@@ -24,11 +24,11 @@ You are the **Antigravity Governance Architect**. Your role is to establish and

### Outline

-You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
+You are updating the project constitution at `AGENTS.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

-1. Load the existing constitution template at `memory/constitution.md`.
+1. Load the existing constitution template at `AGENTS.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.

@@ -49,10 +49,10 @@ Follow this execution flow:
   - Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
-   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
+   - Read `.agents/skills/speckit-plan/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
-   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
+   - Read `.agents/skills/speckit-specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
-   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
+   - Read `.agents/skills/speckit-tasks/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
-   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
+   - Read each command file in `.agents/skills/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.

5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
@@ -69,7 +69,7 @@ Follow this execution flow:
   - Dates ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).

-7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
+7. Write the completed constitution back to `AGENTS.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
@@ -87,4 +87,16 @@ If the user supplies partial updates (e.g., only one principle revision), still

If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.

-Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
+Do not create a new template; always operate on the existing `AGENTS.md` file.
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-diff
description: Compare two versions of a spec or plan to highlight changes.
-version: 1.0.0
+version: 1.8.9
depends-on: []
---
@@ -84,3 +84,15 @@ Compare two versions of a specification artifact and produce a structured diff r
- **Highlight Impact**: Explain what each change means for implementation
- **Flag Breaking Changes**: Any change that invalidates existing work
- **Ignore Whitespace**: Focus on semantic changes, not formatting

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-implement
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md (with Ironclad Anti-Regression Protocols)
-version: 1.0.0
+version: 1.8.9
depends-on:
- speckit-tasks
---
@@ -81,7 +81,7 @@ At the start of execution and after every 3 modifications:

### Outline

-1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+1. Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
@@ -246,3 +246,15 @@ At the start of execution and after every 3 modifications:
---

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit-tasks` first to regenerate the task list.

+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-migrate
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
-version: 1.0.0
+version: 1.8.9
depends-on: []
---
@@ -116,3 +116,15 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
|
|||||||
- **Preserve Intent**: Use code comments and naming to understand purpose
|
- **Preserve Intent**: Use code comments and naming to understand purpose
|
||||||
- **Flag TODOs**: Any TODO/FIXME/HACK in code becomes an open task
|
- **Flag TODOs**: Any TODO/FIXME/HACK in code becomes an open task
|
||||||
- **Be Conservative**: When unsure, ask rather than assume
|
- **Be Conservative**: When unsure, ask rather than assume
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## LCBP3-DMS Context (MUST LOAD)
|
||||||
|
|
||||||
|
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
|
||||||
|
|
||||||
|
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
|
||||||
|
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
|
||||||
|
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
|
||||||
|
- Helper script real paths
|
||||||
|
- Commit checklist
|
||||||
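The same version bump (1.0.0 → 1.8.9) repeats across every skill file in this compare, which is exactly the drift a validate-versions.sh-style check guards against. A minimal sketch of such a check, using a throwaway directory and the `version:` frontmatter field shown in the diffs (the layout and expected value are illustrative, not the real script):

```shell
# Build a disposable skills tree with one deliberately stale version.
EXPECTED="1.8.9"
demo=$(mktemp -d)
mkdir -p "$demo/speckit-plan" "$demo/speckit-tasks"
printf -- '---\nname: speckit-plan\nversion: 1.8.9\n---\n' > "$demo/speckit-plan/SKILL.md"
printf -- '---\nname: speckit-tasks\nversion: 1.0.0\n---\n' > "$demo/speckit-tasks/SKILL.md"

# Extract the frontmatter version from each SKILL.md and count mismatches.
mismatches=0
for f in "$demo"/*/SKILL.md; do
  v=$(sed -n 's/^version: //p' "$f")
  if [ "$v" != "$EXPECTED" ]; then
    echo "stale version in $f: $v (expected $EXPECTED)"
    mismatches=$((mismatches + 1))
  fi
done
echo "mismatches=$mismatches"
```

A real script would also cross-check README.md and the skills VERSION file, as the integration tests later in this compare do.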
@@ -1,7 +1,7 @@
 ---
 name: speckit-plan
 description: Execute the implementation planning workflow using the plan template to generate design artifacts.
-version: 1.0.0
+version: 1.8.9
 depends-on:
   - speckit-specify
 handoffs:
@@ -32,7 +32,7 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
 
 1. **Setup**: Run `../scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
 
-2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
+2. **Load context**: Read FEATURE_SPEC and `AGENTS.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
 
 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
    - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -85,7 +85,7 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
 - Output OpenAPI/GraphQL schema to `/contracts/`
 
 3. **Agent context update**:
-   - Run `../scripts/bash/update-agent-context.sh gemini`
+   - Run `../scripts/bash/update-agent-context.sh windsurf`
    - These scripts detect which AI agent is in use
    - Update the appropriate agent-specific context file
    - Add only new technology from current plan
@@ -97,3 +97,15 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
 
 - Use absolute paths
 - ERROR on gate failures or unresolved clarifications
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
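The Setup step above parses FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH out of `setup-plan.sh --json` output. Assuming the script emits a flat JSON object with those keys (an assumption based on the field names; the sample payload and `json_get` helper below are illustrative only — real tooling would use `jq`):

```shell
# Hypothetical payload in the shape the step describes.
json='{"FEATURE_SPEC":"/repo/specs/005-user-auth/spec.md","IMPL_PLAN":"/repo/specs/005-user-auth/plan.md","SPECS_DIR":"/repo/specs/005-user-auth","BRANCH":"005-user-auth"}'

# jq-free extraction of a flat string value by key (demo only).
json_get() {
  printf '%s' "$2" | sed -n 's/.*"'"$1"'":"\([^"]*\)".*/\1/p'
}

FEATURE_SPEC=$(json_get FEATURE_SPEC "$json")
BRANCH=$(json_get BRANCH "$json")
echo "$FEATURE_SPEC"
echo "$BRANCH"
```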
@@ -3,7 +3,7 @@
 **Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
 **Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
 
-**Note**: This template is filled in by the `/speckit-plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
+**Note**: This template is filled in by the `/speckit-plan` command. See `.agents/skills/plan.md` for the execution workflow.
 
 ## Summary
 
@@ -1,7 +1,7 @@
 ---
 name: speckit-quizme
 description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
-version: 1.0.0
+version: 1.8.9
 handoffs:
   - label: Clarify Spec Requirements
     agent: speckit-clarify
@@ -65,3 +65,15 @@ Execution steps:
 - **Be a Skeptic**: Don't assume the happy path works.
 - **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
 - **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-reviewer
 description: Perform code review with actionable feedback and suggestions.
-version: 1.0.0
+version: 1.8.9
 depends-on: []
 ---
 
@@ -142,3 +142,15 @@ Review code changes and provide structured feedback with severity levels.
 - **Be Balanced**: Mention what's good, not just what's wrong
 - **Prioritize**: Focus on real issues, not style nitpicks
 - **Be Educational**: Explain WHY something is an issue
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-security-audit
 description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
-version: 1.0.0
+version: 1.8.9
 depends-on:
   - speckit-checker
 ---
@@ -12,16 +12,16 @@ You are the **Antigravity Security Sentinel**. Your mission is to identify secur
 
 ## Task
 
-Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.
+Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security-authentication.md`.
 
 ## Context Loading
 
 Before auditing, load the security context:
 
-1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
+1. Read `specs/06-Decision-Records/ADR-016-security-authentication.md` for project security decisions
 2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
-3. Read `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` for CASL permission definitions
+3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions
-4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)
+4. Read `AGENTS.md` for security rules (Section: Security Rules Non-Negotiable + Security & Integrity Audit Protocol)
 
 ## Execution Steps
 
@@ -44,7 +44,7 @@ Scan the `backend/src/` directory for each OWASP category:
 
 ### Phase 2: CASL Authorization Audit
 
-1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql`
+1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`
 2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:
 
 ```bash
@@ -197,3 +197,15 @@ Generate a structured report:
 - **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
 - **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
 - **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
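The Phase 2 controller-coverage scan in speckit-security-audit can be approximated with `grep -L`, which lists files that do *not* contain a pattern. A minimal sketch against throwaway fixtures — in the real audit the targets would be `*.controller.ts` files under `backend/src/`:

```shell
# Two fake controllers: one guarded, one missing the CASL guard.
demo=$(mktemp -d)
cat > "$demo/guarded.controller.ts" <<'EOF'
@UseGuards(CaslAbilityGuard)
export class GuardedController {}
EOF
cat > "$demo/unguarded.controller.ts" <<'EOF'
export class UnguardedController {}
EOF

# -r recurse, -L list files WITHOUT a match, --include restrict to controllers.
unguarded=$(grep -rL --include='*.controller.ts' 'CaslAbilityGuard' "$demo")
echo "$unguarded"
```

Note this is a heuristic: a guard applied at the module or global level would not appear in the controller file, so hits still need manual review, as the skill itself advises.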
@@ -1,7 +1,7 @@
 ---
 name: speckit-specify
 description: Create or update the feature specification from a natural language feature description.
-version: 1.0.0
+version: 1.8.9
 handoffs:
   - label: Build Technical Plan
     agent: speckit-plan
@@ -64,8 +64,8 @@ Given that feature description, do this:
 
    d. Run the script `../scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
-     - Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
+     - Bash example: `.agents/scripts/bash/create-new-feature.sh --json "{{args}}" --number 5 --short-name "user-auth" "Add user authentication"`
-     - PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
+     - PowerShell example: `.agents/scripts/powershell/create-new-feature.ps1 -Json -Args '{{args}}' -Number 5 -ShortName "user-auth" "Add user authentication"`
 
 **IMPORTANT**:
 - Check all three sources (remote branches, local branches, specs directories) to find the highest number
@@ -262,3 +262,15 @@ Success criteria must be:
 - "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
 - "React components render efficiently" (framework-specific)
 - "Redis cache hit rate above 80%" (technology-specific)
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../\_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-status
 description: Display a dashboard showing feature status, completion percentage, and blockers.
-version: 1.0.0
+version: 1.8.9
 depends-on: []
 ---
 
@@ -109,3 +109,15 @@ Generate a dashboard view of all features and their completion status.
 - **Be Visual**: Use progress bars and tables
 - **Be Actionable**: Every status should have a "next action"
 - **Be Fast**: Cache nothing, always recalculate
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-tasks
 description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
-version: 1.0.0
+version: 1.8.9
 depends-on:
   - speckit-plan
 handoffs:
@@ -145,3 +145,15 @@ Every task MUST strictly follow this format:
 - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
 - Each phase should be a complete, independently testable increment
 - **Final Phase**: Polish & Cross-Cutting Concerns
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-taskstoissues
 description: Convert existing tasks into actionable, dependency-ordered issues for the feature based on available design artifacts.
-version: 1.1.0
+version: 1.8.9
 depends-on:
   - speckit-tasks
 tools: ['github/github-mcp-server/issue_write']
@@ -204,3 +204,15 @@ Convert all tasks from `tasks.md` into well-structured issues on the appropriate
 - **Label Consistency**: Use a consistent label taxonomy across all issues
 - **Platform Safety**: Never create issues on repos that don't match the git remote
 - **Dry Run Support**: Always support `--dry-run` to preview before creating
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-tester
 description: Execute tests, measure coverage, and report results.
-version: 1.0.0
+version: 1.8.9
 depends-on: []
 ---
 
@@ -120,3 +120,15 @@ Detect the project's test framework, execute tests, and generate a comprehensive
 - **Preserve Output**: Keep full test output for debugging
 - **Be Helpful**: Suggest fixes for common failure patterns
 - **Respect Timeouts**: Set reasonable timeout (5 min default)
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -1,7 +1,7 @@
 ---
 name: speckit-validate
 description: Validate that implementation matches specification requirements.
-version: 1.0.0
+version: 1.8.9
 depends-on:
   - speckit-implement
 ---
@@ -92,3 +92,15 @@ Post-implementation validation that compares code against spec requirements.
 - **Be Fair**: Semantic matching, not just keyword matching
 - **Be Actionable**: Every gap should have a clear fix recommendation
 - **Don't Block on Style**: Focus on functional coverage, not code style
+
+---
+
+## LCBP3-DMS Context (MUST LOAD)
+
+Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
+
+- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
+- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
+- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
+- Helper script real paths
+- Commit checklist
@@ -0,0 +1,241 @@
|
|||||||
|
/**
|
||||||
|
* skill-integration.test.js - Integration tests for .agents skills
|
||||||
|
* Part of LCBP3-DMS Phase 3 enhancements
|
||||||
|
*/
|
||||||
|
|
||||||
|
const fs = require('fs');
|
||||||
|
const path = require('path');
|
||||||
|
const { execSync } = require('child_process');
|
||||||
|
|
||||||
|
// Test configuration
|
||||||
|
const BASE_DIR = path.resolve(__dirname, '..');
|
||||||
|
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
|
||||||
|
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
|
||||||
|
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
|
||||||
|
|
||||||
|
// Test utilities
|
||||||
|
class SkillTestSuite {
|
||||||
|
constructor() {
|
||||||
|
this.results = {
|
||||||
|
passed: 0,
|
||||||
|
failed: 0,
|
||||||
|
errors: []
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
log(message, type = 'info') {
|
||||||
|
const colors = {
|
||||||
|
info: '\x1b[36m', // Cyan
|
||||||
|
pass: '\x1b[32m', // Green
|
||||||
|
fail: '\x1b[31m', // Red
|
||||||
|
warn: '\x1b[33m', // Yellow
|
||||||
|
reset: '\x1b[0m'
|
||||||
|
};
|
||||||
|
|
||||||
|
const color = colors[type] || colors.info;
|
||||||
|
console.log(`${color}${message}${colors.reset}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
assert(condition, message) {
|
||||||
|
if (condition) {
|
||||||
|
this.log(` PASS: ${message}`, 'pass');
|
||||||
|
this.results.passed++;
|
||||||
|
return true;
|
||||||
|
} else {
|
||||||
|
this.log(` FAIL: ${message}`, 'fail');
|
||||||
|
this.results.failed++;
|
||||||
|
this.results.errors.push(message);
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
testDirectoryExists(dirPath, description) {
|
||||||
|
const exists = fs.existsSync(dirPath);
|
||||||
|
this.assert(exists, `${description} exists at ${dirPath}`);
|
||||||
|
return exists;
|
||||||
|
}
|
||||||
|
|
||||||
|
testFileExists(filePath, description) {
|
||||||
|
const exists = fs.existsSync(filePath);
|
||||||
|
this.assert(exists, `${description} exists at ${filePath}`);
|
||||||
|
return exists;
|
||||||
|
}
|
||||||
|
|
||||||
|
testFileContent(filePath, pattern, description) {
|
||||||
|
if (!fs.existsSync(filePath)) {
|
||||||
|
this.assert(false, `${description} - file not found: ${filePath}`);
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
const content = fs.readFileSync(filePath, 'utf8');
|
||||||
|
const matches = content.match(pattern);
|
||||||
|
this.assert(matches !== null, `${description} - pattern found in ${filePath}`);
|
||||||
|
return matches !== null;
|
||||||
|
} catch (error) {
|
||||||
|
this.assert(false, `${description} - error reading file: ${error.message}`);
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
runScript(scriptPath, description) {
|
||||||
|
try {
|
||||||
|
const output = execSync(scriptPath, { encoding: 'utf8', cwd: BASE_DIR });
|
||||||
|
this.log(` SCRIPT: ${description} executed successfully`, 'pass');
|
||||||
|
return { success: true, output };
|
||||||
|
} catch (error) {
|
||||||
|
this.log(` SCRIPT: ${description} failed - ${error.message}`, 'fail');
|
||||||
|
this.results.failed++;
|
||||||
|
this.results.errors.push(`${description}: ${error.message}`);
|
||||||
|
return { success: false, error: error.message };
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test suite implementation
|
||||||
|
const testSuite = new SkillTestSuite();
|
||||||
|
|
||||||
|
function runAllTests() {
|
||||||
|
testSuite.log('=== .agents Integration Test Suite ===', 'info');
|
||||||
|
testSuite.log(`Base directory: ${BASE_DIR}`, 'info');
|
||||||
|
testSuite.log(`Started: ${new Date().toISOString()}`, 'info');
|
||||||
|
testSuite.log('');
|
||||||
|
|
||||||
|
// Test 1: Directory Structure
|
||||||
|
testSuite.log('Test 1: Directory Structure', 'info');
|
||||||
|
testSuite.testDirectoryExists(AGENTS_DIR, '.agents directory');
|
||||||
|
testSuite.testDirectoryExists(SKILLS_DIR, 'skills directory');
|
||||||
|
testSuite.testDirectoryExists(WORKFLOWS_DIR, 'workflows directory');
|
||||||
|
testSuite.testDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts directory');
|
||||||
|
testSuite.testDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules directory');
|
||||||
|
testSuite.log('');
|
||||||
|
|
||||||
|
// Test 2: Core Files
|
||||||
|
testSuite.log('Test 2: Core Files', 'info');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'README.md'), 'README.md');
|
||||||
|
testSuite.testFileExists(path.join(SKILLS_DIR, 'VERSION'), 'skills VERSION file');
|
||||||
|
testSuite.testFileExists(path.join(SKILLS_DIR, 'skills.md'), 'skills.md documentation');
|
||||||
|
testSuite.log('');
|
||||||
|
|
||||||
|
// Test 3: Script Files
|
||||||
|
testSuite.log('Test 3: Validation Scripts', 'info');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash validate-versions.sh');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash audit-skills.sh');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash sync-workflows.sh');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell validate-versions.ps1');
|
||||||
|
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell audit-skills.ps1');
|
||||||
|
testSuite.log('');
|
||||||
|
|
||||||
|
// Test 4: Version Consistency
|
||||||
|
testSuite.log('Test 4: Version Consistency', 'info');
|
||||||
|
testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /v1\.8\.6/, 'README.md version');
|
||||||
|
testSuite.testFileContent(path.join(SKILLS_DIR, 'VERSION'), /version: 1\.8\.6/, 'skills VERSION file');
|
||||||
|
testSuite.testFileContent(path.join(SKILLS_DIR, 'skills.md'), /v1\.8\.6/, 'skills.md version');
|
||||||
|
testSuite.testFileContent(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), /v1\.8\.6/, 'project context version');
|
||||||
|
testSuite.log('');
|
||||||
|
|
||||||
|
// Test 5: Skills Structure
|
||||||
|
testSuite.log('Test 5: Skills Structure', 'info');
|
||||||
|
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
|
||||||
|
const itemPath = path.join(SKILLS_DIR, item);
|
||||||
|
return fs.statSync(itemPath).isDirectory() && item.startsWith('speckit-') || item === 'nestjs-best-practices' || item === 'next-best-practices';
|
||||||
|
});
|
||||||
|
|
||||||
|
testSuite.assert(skillDirs.length >= 20, `Found at least 20 skill directories (found ${skillDirs.length})`);

  // Test a few key skills
  const keySkills = ['speckit-plan', 'speckit-implement', 'speckit-specify', 'speckit-validate'];
  keySkills.forEach(skill => {
    const skillPath = path.join(SKILLS_DIR, skill);
    const skillMdPath = path.join(skillPath, 'SKILL.md');
    testSuite.testDirectoryExists(skillPath, `${skill} directory`);
    testSuite.testFileExists(skillMdPath, `${skill} SKILL.md`);

    if (fs.existsSync(skillMdPath)) {
      testSuite.testFileContent(skillMdPath, /^name:/, `${skill} has name field`);
      testSuite.testFileContent(skillMdPath, /^description:/, `${skill} has description field`);
      testSuite.testFileContent(skillMdPath, /^version:/, `${skill} has version field`);
      testSuite.testFileContent(skillMdPath, /^## Role$/, `${skill} has Role section`);
      testSuite.testFileContent(skillMdPath, /^## Task$/, `${skill} has Task section`);
    }
  });
  testSuite.log('');

  // Test 6: Workflows Structure
  testSuite.log('Test 6: Workflows Structure', 'info');
  const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(item => item.endsWith('.md'));
  testSuite.assert(workflowFiles.length >= 20, `Found at least 20 workflow files (found ${workflowFiles.length})`);

  // Test key workflows
  const keyWorkflows = ['00-speckit.all.md', '02-speckit.specify.md', '04-speckit.plan.md', '07-speckit.implement.md'];
  keyWorkflows.forEach(workflow => {
    const workflowPath = path.join(WORKFLOWS_DIR, workflow);
    testSuite.testFileExists(workflowPath, `${workflow} file`);
  });
  testSuite.log('');

  // Test 7: Rules Structure
  testSuite.log('Test 7: Rules Structure', 'info');
  const rulesDir = path.join(AGENTS_DIR, 'rules');
  const ruleFiles = fs.readdirSync(rulesDir).filter(item => item.endsWith('.md'));
  testSuite.assert(ruleFiles.length >= 10, `Found at least 10 rule files (found ${ruleFiles.length})`);

  // Test key rules
  const keyRules = ['00-project-context.md', '01-adr-019-uuid.md', '02-security.md'];
  keyRules.forEach(rule => {
    const rulePath = path.join(rulesDir, rule);
    testSuite.testFileExists(rulePath, `${rule} file`);
  });
  testSuite.log('');

  // Test 8: Script Execution (if on a Unix-like system)
  if (process.platform !== 'win32') {
    testSuite.log('Test 8: Script Execution', 'info');

    // Test version validation script
    const versionScript = path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh');
    if (fs.existsSync(versionScript)) {
      try {
        // Make executable
        fs.chmodSync(versionScript, '755');
        testSuite.runScript(versionScript, 'Version validation script');
      } catch (error) {
        testSuite.log(`  SKIP: Cannot execute version script - ${error.message}`, 'warn');
      }
    }

    testSuite.log('');
  }

  // Test 9: Documentation Quality
  testSuite.log('Test 9: Documentation Quality', 'info');
  testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /## Troubleshooting/, 'README.md has troubleshooting section');
  testSuite.testFileContent(path.join(SKILLS_DIR, 'skills.md'), /## Skill Dependency Matrix/, 'skills.md has dependency matrix');
  testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /## Architecture/, 'README.md has architecture section');
  testSuite.log('');

  // Results Summary
  testSuite.log('=== Test Results Summary ===', 'info');
  testSuite.log(`Passed: ${testSuite.results.passed}`, 'pass');
  testSuite.log(`Failed: ${testSuite.results.failed}`, testSuite.results.failed > 0 ? 'fail' : 'pass');

  if (testSuite.results.errors.length > 0) {
    testSuite.log('Errors:', 'fail');
    testSuite.results.errors.forEach(error => {
      testSuite.log(`  - ${error}`, 'fail');
    });
  }

  testSuite.log(`Completed: ${new Date().toISOString()}`, 'info');

  return testSuite.results.failed === 0;
}

// Export for use in other modules
module.exports = { SkillTestSuite, runAllTests };

// Run tests if called directly
if (require.main === module) {
  const success = runAllTests();
  process.exit(success ? 0 : 1);
}
@@ -0,0 +1,235 @@
/**
 * workflow-validation.test.js - Integration tests for workflows
 * Part of LCBP3-DMS Phase 3 enhancements
 */

const fs = require('fs');
const path = require('path');

// Test configuration
const BASE_DIR = path.resolve(__dirname, '..');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');

// Test utilities
class WorkflowTestSuite {
  constructor() {
    this.results = {
      passed: 0,
      failed: 0,
      errors: []
    };
  }

  log(message, type = 'info') {
    const colors = {
      info: '\x1b[36m', // Cyan
      pass: '\x1b[32m', // Green
      fail: '\x1b[31m', // Red
      warn: '\x1b[33m', // Yellow
      reset: '\x1b[0m'
    };

    const color = colors[type] || colors.info;
    console.log(`${color}${message}${colors.reset}`);
  }
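The `log` method wraps messages in ANSI escape codes, with unknown types falling back to cyan. The same colorization in isolation (the color table mirrors the one above):

```javascript
// Same ANSI table as WorkflowTestSuite.log.
const colors = {
  info: '\x1b[36m',
  pass: '\x1b[32m',
  fail: '\x1b[31m',
  warn: '\x1b[33m',
  reset: '\x1b[0m'
};

function colorize(message, type = 'info') {
  // Unknown types fall back to the info color, exactly as in the class.
  const color = colors[type] || colors.info;
  return `${color}${message}${colors.reset}`;
}

console.log(colorize('PASS: example', 'pass'));
```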

  assert(condition, message) {
    if (condition) {
      this.log(`  PASS: ${message}`, 'pass');
      this.results.passed++;
      return true;
    } else {
      this.log(`  FAIL: ${message}`, 'fail');
      this.results.failed++;
      this.results.errors.push(message);
      return false;
    }
  }

  testWorkflowFile(filePath, expectedName) {
    if (!fs.existsSync(filePath)) {
      this.assert(false, `Workflow file exists: ${expectedName}`);
      return false;
    }

    try {
      const content = fs.readFileSync(filePath, 'utf8');

      // Basic structure checks
      this.assert(content.length > 0, `${expectedName} has content`);
      this.assert(content.includes('#'), `${expectedName} has markdown headers`);

      // Check for workflow-specific patterns
      if (expectedName.includes('speckit-')) {
        this.assert(content.includes('speckit-'), `${expectedName} contains speckit reference`);
      }

      // Check for proper markdown formatting
      const lines = content.split('\n');
      const nonEmptyLines = lines.filter(line => line.trim().length > 0);
      this.assert(nonEmptyLines.length >= 5, `${expectedName} has sufficient content`);

      return true;
    } catch (error) {
      this.assert(false, `${expectedName} - error reading file: ${error.message}`);
      return false;
    }
  }

  validateWorkflowDependency(workflowName, workflowContent) {
    // Check if the workflow references existing skills
    const skillReferences = workflowContent.match(/@speckit-\w+/g) || [];
    const skillsDir = path.join(AGENTS_DIR, 'skills');

    for (const skillRef of skillReferences) {
      const skillName = skillRef.replace('@', '');
      const skillPath = path.join(skillsDir, skillName);

      if (!fs.existsSync(skillPath)) {
        this.assert(false, `${workflowName} references non-existent skill: ${skillRef}`);
        return false;
      }
    }

    return true;
  }
}
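`validateWorkflowDependency` relies on a global regex to pull skill references out of workflow text before checking them against the skills directory. The extraction step in isolation:

```javascript
// Same pattern as validateWorkflowDependency: find @speckit-* mentions,
// then strip the leading '@' to get candidate skill directory names.
const sample = 'First run @speckit-plan, then @speckit-tasks. Plain speckit-plan text is ignored.';
const refs = sample.match(/@speckit-\w+/g) || [];
const skillNames = refs.map(ref => ref.replace('@', ''));

console.log(skillNames); // [ 'speckit-plan', 'speckit-tasks' ]
```

Note the `|| []` fallback: `String.prototype.match` returns `null` when a global regex finds nothing, so the loop body can always iterate safely.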

// Expected workflows mapping
const expectedWorkflows = {
  '00-speckit.all.md': 'Full pipeline workflow',
  '01-speckit.constitution.md': 'Constitution workflow',
  '02-speckit.specify.md': 'Specification workflow',
  '03-speckit.clarify.md': 'Clarification workflow',
  '04-speckit.plan.md': 'Planning workflow',
  '05-speckit.tasks.md': 'Task breakdown workflow',
  '06-speckit.analyze.md': 'Analysis workflow',
  '07-speckit.implement.md': 'Implementation workflow',
  '08-speckit.checker.md': 'Static analysis workflow',
  '09-speckit.tester.md': 'Testing workflow',
  '10-speckit.reviewer.md': 'Code review workflow',
  '11-speckit.validate.md': 'Validation workflow',
  'speckit.prepare.md': 'Preparation workflow',
  'schema-change.md': 'Schema change workflow',
  'create-backend-module.md': 'Backend module creation',
  'create-frontend-page.md': 'Frontend page creation',
  'deploy.md': 'Deployment workflow',
  'review.md': 'Code review workflow',
  'util-speckit.checklist.md': 'Checklist utility',
  'util-speckit.diff.md': 'Diff utility',
  'util-speckit.migrate.md': 'Migration utility',
  'util-speckit.quizme.md': 'Quiz utility',
  'util-speckit.status.md': 'Status utility',
  'util-speckit.taskstoissues.md': 'Task-to-issues utility'
};

// Test suite implementation
const workflowTestSuite = new WorkflowTestSuite();

function runWorkflowTests() {
  workflowTestSuite.log('=== Workflow Validation Test Suite ===', 'info');
  workflowTestSuite.log(`Workflows directory: ${WORKFLOWS_DIR}`, 'info');
  workflowTestSuite.log(`Started: ${new Date().toISOString()}`, 'info');
  workflowTestSuite.log('');

  // Test 1: Workflows directory exists
  workflowTestSuite.log('Test 1: Directory Structure', 'info');
  workflowTestSuite.assert(fs.existsSync(WORKFLOWS_DIR), 'Workflows directory exists');
  workflowTestSuite.log('');

  // Test 2: Expected workflow files exist
  workflowTestSuite.log('Test 2: Expected Workflow Files', 'info');
  let foundWorkflows = 0;

  for (const [filename, description] of Object.entries(expectedWorkflows)) {
    const filePath = path.join(WORKFLOWS_DIR, filename);
    workflowTestSuite.testWorkflowFile(filePath, description);
    if (fs.existsSync(filePath)) {
      foundWorkflows++;
    }
  }

  workflowTestSuite.assert(foundWorkflows >= 20, `Found at least 20 workflows (found ${foundWorkflows})`);
  workflowTestSuite.log('');

  // Test 3: Workflow content validation
  workflowTestSuite.log('Test 3: Content Validation', 'info');

  for (const filename of Object.keys(expectedWorkflows)) {
    const filePath = path.join(WORKFLOWS_DIR, filename);

    if (fs.existsSync(filePath)) {
      try {
        const content = fs.readFileSync(filePath, 'utf8');

        // Check for proper workflow structure
        workflowTestSuite.assert(content.includes('#'), `${filename} has markdown headers`);
        workflowTestSuite.assert(content.length > 100, `${filename} has substantial content`);

        // Validate skill dependencies
        workflowTestSuite.validateWorkflowDependency(filename, content);

      } catch (error) {
        workflowTestSuite.assert(false, `${filename} - content validation error: ${error.message}`);
      }
    }
  }
  workflowTestSuite.log('');

  // Test 4: Workflow naming consistency
  workflowTestSuite.log('Test 4: Naming Consistency', 'info');
  const actualFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));

  for (const actualFile of actualFiles) {
    if (!expectedWorkflows[actualFile]) {
      workflowTestSuite.log(`  UNEXPECTED: ${actualFile} not in expected list`, 'warn');
    }
  }

  for (const expectedFile of Object.keys(expectedWorkflows)) {
    if (!actualFiles.includes(expectedFile)) {
      workflowTestSuite.assert(false, `Missing expected workflow: ${expectedFile}`);
    }
  }
  workflowTestSuite.log('');
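Test 4 is a two-way set difference between the expected filenames (the mapping's keys) and the actual directory listing: expected-but-absent files fail the suite, while present-but-unexpected files only warn. The core logic reduced to plain arrays:

```javascript
// Mirrors Test 4: expected names come from the mapping's keys,
// actual names from the directory listing.
const expected = ['00-speckit.all.md', 'deploy.md', 'review.md'];
const actual = ['00-speckit.all.md', 'review.md', 'extra.md'];

const missing = expected.filter(f => !actual.includes(f));     // in expected, not on disk -> FAIL
const unexpected = actual.filter(f => !expected.includes(f));  // on disk, not expected -> WARN

console.log(missing, unexpected); // [ 'deploy.md' ] [ 'extra.md' ]
```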

  // Test 5: Cross-reference validation
  workflowTestSuite.log('Test 5: Cross-Reference Validation', 'info');

  // Check if README.md references workflows correctly
  const readmePath = path.join(AGENTS_DIR, 'README.md');
  if (fs.existsSync(readmePath)) {
    const readmeContent = fs.readFileSync(readmePath, 'utf8');
    workflowTestSuite.assert(
      readmeContent.includes('.windsurf/workflows'),
      'README.md references correct workflows path'
    );
  }
  workflowTestSuite.log('');

  // Results Summary
  workflowTestSuite.log('=== Workflow Test Results Summary ===', 'info');
  workflowTestSuite.log(`Passed: ${workflowTestSuite.results.passed}`, 'pass');
  workflowTestSuite.log(`Failed: ${workflowTestSuite.results.failed}`, workflowTestSuite.results.failed > 0 ? 'fail' : 'pass');

  if (workflowTestSuite.results.errors.length > 0) {
    workflowTestSuite.log('Errors:', 'fail');
    workflowTestSuite.results.errors.forEach(error => {
      workflowTestSuite.log(`  - ${error}`, 'fail');
    });
  }

  workflowTestSuite.log(`Completed: ${new Date().toISOString()}`, 'info');

  return workflowTestSuite.results.failed === 0;
}

// Export for use in other modules
module.exports = { WorkflowTestSuite, runWorkflowTests };

// Run tests if called directly
if (require.main === module) {
  const success = runWorkflowTests();
  process.exit(success ? 0 : 1);
}
@@ -1,85 +0,0 @@
---
description: Run the full speckit pipeline from specification to analysis in one command.
---

# Workflow: speckit-all

This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit-prepare` instead.

## Preparation Phase (Steps 1-5)

1. **Specify** (`/speckit-specify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-specify/SKILL.md`
   - Execute with the user's feature description
   - Creates: `spec.md`

2. **Clarify** (`/speckit-clarify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-clarify/SKILL.md`
   - Execute to resolve ambiguities
   - Updates: `spec.md`

3. **Plan** (`/speckit-plan`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-plan/SKILL.md`
   - Execute to create the technical design
   - Creates: `plan.md`

4. **Tasks** (`/speckit-tasks`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-tasks/SKILL.md`
   - Execute to generate the task breakdown
   - Creates: `tasks.md`

5. **Analyze** (`/speckit-analyze`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-analyze/SKILL.md`
   - Execute to validate consistency across spec, plan, and tasks
   - Output: Analysis report
   - **Gate**: If critical issues are found, stop and fix before proceeding

## Implementation Phase (Steps 6-7)

6. **Implement** (`/speckit-implement`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-implement/SKILL.md`
   - Execute all tasks from `tasks.md` with anti-regression protocols
   - Output: Working implementation

7. **Check** (`/speckit-checker`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-checker/SKILL.md`
   - Run static analysis (linters, type checkers, security scanners)
   - Output: Checker report

## Verification Phase (Steps 8-10)

8. **Test** (`/speckit-tester`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-tester/SKILL.md`
   - Run tests with coverage
   - Output: Test + coverage report

9. **Review** (`/speckit-reviewer`):
   - Use the `view_file` tool to read: `.agents/skills/speckit-reviewer/SKILL.md`
   - Perform code review
   - Output: Review report with findings

10. **Validate** (`/speckit-validate`):
    - Use the `view_file` tool to read: `.agents/skills/speckit-validate/SKILL.md`
    - Verify the implementation matches spec requirements
    - Output: Validation report (pass/fail)

## Usage

```
/speckit-all "Build a user authentication system with OAuth2 support"
```

## Pipeline Comparison

| Pipeline           | Steps                     | Use When                               |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit-prepare` | 1-5 (Specify → Analyze)   | Planning only — you'll implement later |
| `/speckit-all`     | 1-10 (Specify → Validate) | Full lifecycle in one pass             |

## On Error

If any step fails, stop the pipeline and report:

- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit-clarify` to resolve ambiguities before continuing")
@@ -1,18 +0,0 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---

# Workflow: speckit-constitution

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-constitution/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If the `.specify/` directory doesn't exist: Initialize the speckit structure first
@@ -1,19 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
---

# Workflow: speckit-specify

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.
   - This is typically the starting point of a new feature.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-specify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the feature description for the skill's logic.

4. **On Error**:
   - If no feature description is provided: Ask the user to describe the feature they want to specify
@@ -1,18 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

# Workflow: speckit-clarify

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-clarify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit-specify` first to create the feature specification
@@ -1,18 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---

# Workflow: speckit-plan

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-plan/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit-specify` first to create the feature specification
@@ -1,19 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---

# Workflow: speckit-tasks

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-tasks/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `plan.md` is missing: Run `/speckit-plan` first
   - If `spec.md` is missing: Run `/speckit-specify` first
@@ -1,22 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

// turbo-all

# Workflow: speckit-analyze

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-analyze/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `spec.md` is missing: Run `/speckit-specify` first
   - If `plan.md` is missing: Run `/speckit-plan` first
   - If `tasks.md` is missing: Run `/speckit-tasks` first
@@ -1,20 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

# Workflow: speckit-implement

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-implement/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `tasks.md` is missing: Run `/speckit-tasks` first
   - If `plan.md` is missing: Run `/speckit-plan` first
   - If `spec.md` is missing: Run `/speckit-specify` first
@@ -1,21 +0,0 @@
---
description: Run static analysis tools and aggregate results.
---

// turbo-all

# Workflow: speckit-checker

1. **Context Analysis**:
   - The user may specify paths to check, or run on the entire project.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-checker/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no linting tools are available: Report which tools to install based on the project type
   - If tools fail: Show the raw error and suggest config fixes
@@ -1,21 +0,0 @@
---
description: Execute tests, measure coverage, and report results.
---

// turbo-all

# Workflow: speckit-tester

1. **Context Analysis**:
   - The user may specify test paths or options, or just run all tests.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-tester/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If no test framework is detected: Report "No test framework found. Install Jest, Vitest, Pytest, or similar."
   - If tests fail: Show failure details and suggest fixes
@@ -1,19 +0,0 @@
---
description: Perform code review with actionable feedback and suggestions.
---

# Workflow: speckit-reviewer

1. **Context Analysis**:
   - The user may specify files to review, "staged" for git staged changes, or "branch" for a branch diff.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-reviewer/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If there are no files to review: Ask the user to stage changes or specify file paths
   - If not a git repo: Review current directory files instead
@@ -1,19 +0,0 @@
---
description: Validate that the implementation matches specification requirements.
---

# Workflow: speckit-validate

1. **Context Analysis**:
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-validate/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
   - Apply the user's prompt as the input arguments/context for the skill's logic.

4. **On Error**:
   - If `tasks.md` is missing: Run `/speckit-tasks` first
   - If implementation has not started: Run `/speckit-implement` first
@@ -1,51 +0,0 @@
---
description: Create a new NestJS backend feature module following project standards
---

# Create NestJS Backend Module

Use this workflow when creating a new feature module in `backend/src/modules/`.
Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-005.

## Steps

// turbo

1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting

// turbo

2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables

3. **Scaffold module folder**

```
backend/src/modules/<module-name>/
├── <module-name>.module.ts
├── <module-name>.controller.ts
├── <module-name>.service.ts
├── dto/
│   ├── create-<module-name>.dto.ts
│   └── update-<module-name>.dto.ts
├── entities/
│   └── <module-name>.entity.ts
└── <module-name>.controller.spec.ts
```

4. **Create Entity** — map ONLY columns defined in the schema SQL. Use TypeORM decorators. Add `@VersionColumn()` if the entity needs optimistic locking.

5. **Create DTOs** — use `class-validator` decorators. Never use `any`. Validate all inputs.

6. **Create Service** — inject the repository via constructor DI. Use transactions for multi-step writes. Add an `Idempotency-Key` guard for POST/PUT/PATCH operations.

7. **Create Controller** — apply `@UseGuards(JwtAuthGuard, CaslAbilityGuard)`. Use proper HTTP status codes. Document with `@ApiTags` and `@ApiOperation`.

8. **Register in Module** — add to `imports`, `providers`, `controllers`, `exports` as needed.

9. **Register in AppModule** — import the new module in `app.module.ts`.

// turbo

10. **Write unit tests** — cover service methods with Jest mocks. Run:

```bash
pnpm test:watch
```

// turbo

11. **Citation** — confirm the implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
Some files were not shown because too many files have changed in this diff.